Carl Sagan, nuking the moon, and not nuking the moon

In 1957, Nobel laureate microbiologist Joshua Lederberg and biostatistician J. B. S. Haldane sat down together and imagined what would happen if the USSR decided to explode a nuclear weapon on the moon.

The Cold War was on, Sputnik had recently been launched, and the 40th anniversary of the Bolshevik Revolution was coming up – a good time for an awe-inspiring political statement. Maybe they read a recent United Press article about the rumored USSR plans. Nuking the moon would make a powerful political statement on earth, but the radiation and disruption could permanently harm scientific research on the moon.

What Lederberg and Haldane did not know was that they were onto something – by the next year, the USSR really was investigating the possibility of dropping a nuke on the moon. They called it “Project E-4,” one of a series of possible lunar missions.

What Lederberg and Haldane definitely did not know was that in that same year, 1958, the US would also study the idea of nuking the moon. They called it “Project A119,” and the Air Force commissioned research on it from Leonard Reiffel, a regular military collaborator and physicist at the University of Illinois. He worked with several other scientists, including a University of Chicago grad student named Carl Sagan.

“Why would anyone think it was a good idea to nuke the moon?”

That’s a great question. Most of us go about our lives comforted by the thought “I would never drop a nuclear weapon on the moon.” The truth is that given a lot of power, a nuclear weapon, and a lot of extremely specific circumstances, we too might find ourselves thinking “I should nuke the moon.”

Reasons to nuke the moon

During the Cold War, dropping a nuclear weapon on the moon would show that you had the rocketry needed to aim a nuclear weapon precisely at long distances. It would show off your spacefaring capability. A visible show could reassure your own side and frighten your enemies.

It could do the same things for public opinion that putting a man on the moon ultimately did. But it’s easier and cheaper:

  • As of the dawn of ICBMs you already have long-distance rockets designed to hold nuclear weapons
  • Nuclear weapons do not require “breathable atmosphere” or “water”
  • You do not have to bring the nuclear weapon safely back from the moon.

There’s not a lot of English-language information online about the USSR E-4 program to nuke the moon. The main reason they cite is wanting to prove that USSR rockets could hit the moon.3 The nuclear weapon attached wasn’t even the main point! That explosion would just be the convenient visual proof.

They probably had more reasons, or at least more nuance to that one reason – again, there’s not a lot of information accessible to me.* We have more information on the US plan, which was declassified in 1990, and probably some of the motivations for the US plan were also considered by the USSR for theirs.

  • Military
  • Scare USSR
  • Demonstrate nuclear deterrent1
    • Results would be educational for doing space warfare in the future2
  • Political
    • Reassure US people of US space capabilities (which were in doubt after the USSR launched Sputnik)
      • More specifically, that we have a nuclear deterrent1
    • “A demonstration of advanced technological capability”2
  • Scientific (they were going to send up batteries of instruments somewhat before the nuking, stationed at distances from the nuke site)
    • Determine thermal conductivity from measuring rate of cooling (post-nuking) (especially of below-dust moon material)
    • Understand moon seismology better via seismograph-type readings from various points at distance from the explosion
      • And especially get some sense of the physical properties of the core of the moon2
MANY PROBLEMS, ONE SOLUTION: BLOW UP THE MOON
As stated by this now-unavailable A Softer World merch shirt design. Hey, Joey Comeau and Emily Horne, if you read this, bring back this t-shirt! I will buy it.

Reasons to not nuke the moon

Aleksandr Zheleznyakov, a Russian rocket engineer, explained some reasons the USSR did not go forward with its project:

  • Nuke might miss the moon
    • and fall back to earth, where it would detonate, because of the planned design which would explode upon impact
      • in the USSR
      • in the non-USSR (causing international incident)
    • and circle sadly around the sun forever
  • You would have to tell foreign observatories to watch the moon at a specific time and place
    • And… they didn’t know how to diplomatically do that? Or how to contact them?

There’s less information on the US side. While the US was not necessarily using the same sea-mine-style detonation system that the planned USSR moon-nuke would have had3, they were still concerned about a failed launch resulting in not just a loose rocket but a loose nuclear weapon crashing to earth.2

(I mean, not that that’s never happened before.)

Even in the commissioned report exploring the idea’s feasibility, Leonard Reiffel and his team clearly did not want to nuke the moon. They outline several reasons it would be bad news for science:

  • Environmental disturbances
  • Permanently disrupting possible organisms and ecosystems
    • In maybe the strongest language in the piece, they describe this as “an unparalleled scientific disaster”
  • Radiological contamination
    • There are some interesting things to be done with detecting subtle moon radiation – the effects of cosmic rays hitting it, detecting a magnetosphere, estimating things like the age of the moon. Nuking the moon would easily spread radiation all over it. It wouldn’t ruin our ability to study this, especially if we had some baseline instrument readings up there first, but it wouldn’t help either.
  • To achieve the scientific objective of understanding moon seismology, we could also just put detectors on the moon and wait. If we needed more force, we could just hit the moon with rockets, or wait for meteor impacts.

I would also like to posit that nuking the moon is kind of an “are we the baddies?” moment, and maybe someone realized that somewhere in there.

Please don't do that :(

Afterwards

Back on that afternoon in 1957, when they imagined the USSR nuking the moon, Lederberg and Haldane ran the numbers and guessed that a nuclear explosion on the moon would be visible from earth. So the USSR’s incentive was there, and the plan seemed politically feasible. They couldn’t do much about that, but it frightened them: such an explosion would disrupt and scatter debris and contamination all over the unexplored surface of the moon – the closest and richest site for space research, a whole mini-planet of celestial material that had not passed through the destructive gauntlet of earth’s atmosphere (as meteors do, the force of reentry blasting away temperature-sensitive and delicate structures).

Lederberg couldn’t stop the USSR from nuking the moon. But early in the space age, he began lobbying against contaminating outer space. He pushed for a research-based approach and international cooperation, back when cooperating with the USSR was not generally on the table. His interest and scientific clout led colleagues to take this seriously. We still do this – we still sanitize outgoing spacecraft so that hardy Earth organisms will (hopefully) not colonize other planets.

A rocket taking earth organisms into outer space is forward contamination.

Lederberg then took this a step further and realized that if there was a chance Earth organisms could disrupt or colonize Moon life, there was a smaller but deadlier chance that Moon organisms could disrupt or colonize Earth life.

A rocket carrying alien organisms from other planets to earth is back contamination.

He realized that in returning space material to earth, we should proceed very, very cautiously until we can prove that it is lifeless. His efforts were instrumental in giving the Apollo program an extensive biosecurity and contamination-reduction program. That program is its own absolutely fascinating story.

Early on, a promising young astrophysicist joined Lederberg in A) pioneering the field of astrobiology and B) raising awareness of space contamination – former A119 contributor and future space advocate Carl Sagan.

Here’s what I think happened: a PhD student fascinated with space works on a secret project with his PhD advisor about nuking the moon. He assists with the work, finds the plan plausible, and is horrified for the future of space research. Stumbling out of this secret program, he learns about a renowned scientist (Joshua Lederberg) calling loudly for care about space contamination.

Sagan perhaps learns, upon further interactions, that Lederberg came to this fear after considering the idea that our enemies would detonate a nuclear bomb on the moon as a political show.

Why, yes, Sagan thinks. What if someone were foolish enough to detonate a nuclear bomb on the moon? What absolute madmen would do that? Imagine that. Well, it would be terrible for space research. Let’s try and stop anybody from ever doing that.

A panel from Homestuck of Dave blasting off into space on a jetpack, with Carl Sagan's face imposed over it. Captioned "THIS IS STUPID"
Artist’s rendition. Apologies to, inexplicably, both Homestuck and Carl Sagan.

And if that’s what happened, it worked! Over fifty years later and nobody thinks about nuking the moon very often anymore. Good job, Sagan.

This is just speculation. But I think it’s plausible.

If you like my work and want to help me out, consider checking out my Patreon! Thanks.

References

* We have, like, the personal website of a USSR rocket scientist – reference 3 below – which is pretty good.

But then we also have an interview that might have been done by journalist Adam Tanner with Russian rocket scientist Boris Chertok, published by Reuters in 1999. I found it on an archived page from the Independent Online, a paper that syndicated with Reuters, where it was uploaded in 2012. I emailed Reuters and they did not have the interview in their archives, but they did have a photograph taken of Chertok from that day, so I’m wondering if they published the article but simply didn’t properly archive it later, and if the Independent Online is the syndicated publication that digitized this piece. (And then later deleted it, since only the Internet Archive copy exists now.) I sent a message to the person I believe is the same Adam Tanner who would have conducted this interview, but haven’t gotten a response. If you have any way of verifying this piece, please reach out.

1. Associated Press, as found in the LA Times archive, “U.S. Weighed A-Blast on Moon in 1950s.” 2000 May 18. https://www.latimes.com/archives/la-xpm-2000-may-18-mn-31395-story.html

2. Project A119, “A Study of Lunar Research Flights”, 1959 June 15. Declassified report: https://archive.org/details/DTIC_AD0425380

This is an extraordinary piece to read. I don’t think I’ve ever read a report where a scientist so earnestly explores a proposal and tries to solve the various technical questions around it while so clearly not wanting the proposal to go forward. For instance:

It is not certain how much seismic energy will be coupled into the moon by an explosion near its surface, hence one may develop an argument that a large explosion would help ensure success of a first seismic experiment. On the other hand, if one wished to proceed at a more leisurely pace, seismographs could be emplaced upon the moon and the nature of possible interferences determined before selection of the explosive device. Such a course would appear to be the obvious one to pursue from a purely scientific viewpoint.

3. Aleksandr Zheleznyakov, translated by Sven Grahn, updated around 1999. “The E-4 project – exploding a nuclear bomb on the Moon.” http://www.svengrahn.pp.se/histind/E3/E3orig.htm

Crossposted to LessWrong.

Internet Harvest (2024, 1)

Internet Harvest is a selection of the most succulent links on the internet that I’ve recently plucked from its fruitful boughs. Feel free to discuss the links in the comments.

Biosecurity

US COVID and flu website + hotline for getting prescribed Paxlovid, for free, for anyone with a positive COVID test and risk factors.

Register now to access free virtual care and treatment for COVID-19 and Flu, 24 hours a day, 7 days a week. Sign up anytime, whether you are sick or not.

https://www.test2treat.org/

I unfortunately had cause to use this recently, and I was struck by how easy it was – as well as the fact that I did not have to talk to anyone via phone or video call.

(It was an option, and they indicated at a couple of points that a medical professional might call me if they had questions, so you should be prepared for that, but in my case they didn’t.) The whole thing, including getting the prescription, was handled digitally over text. This is fantastic.

First fatal case of alaskapox, a novel orthopox virus, in an immunocompromised patient. Orthopox is the virus group that includes smallpox, monkeypox, and cowpox. Alaskapox was discovered in 2015 and seems to be spread by rodents. There have been seven total human cases so far.

The University of Minnesota Center for Infectious Disease Research and Policy (CIDRAP) has opened a Chronic Wasting Disease (CWD) Contingency Planning Project – a group of experts planning for the possibility of CWD spilling over into people.

I think this is a deeply important kind of project to be doing. You see this in some places in some fashion – for instance, there’s a lot of effort and money spent understanding and tracking and controlling avian influenza (a strain which has a high mortality in humans but isn’t infectious between humans, just birds – for now.) But often, this kind of proactive pandemic prevention work isn’t done, even when the evidence is there.

(I wrote about the possibility of chronic wasting disease spilling into humans a few months ago. I ended up supposing that it was possible, but looking at the infection risk posed by another spilled-over prion disease from a more common animal, BSE, it seems like the absolute risk from prion diseases that can infect humans is extremely low. I don’t think people on this project would necessarily disagree with that, but there are a lot of unknowns and plenty of reasons to take even a low risk of a highly lethal disease spillover very cautiously. Still, I’ll have to read up on it, there may well be a higher risk than I assumed.)

Related: the first noticed cases of Alzheimer’s disease transmitted between people (in patients injected with human-derived human growth hormone, with disease appearing decades later). In a past prion post, I wrote: “Meanwhile, Alzheimer’s disease might be slightly infectious – if you take brain extracts from people who died of Alzheimer’s, and inject them into monkeys’ brains, the monkeys develop spongy brain tissue that suggests that the prions are replicating. This technically suggests that the Alzheimer’s amyloids are infectious, even if that would never happen in nature.” Well, it didn’t happen naturally, but I guess it did happen. (h/t Scott at Astral Codex Ten.)

The design history of the biohazard symbol.

Other biology

“Obelisks” are potentially a completely new kind of tiny microorganism, identified from metagenomic RNA sequencing.

One of the best stories in a scientific paper is “Ants trapped for years in an old bunker; survival by cannibalism and eventual escape” by Rutkowski et al, 2016.

First of all: the discovery of ants falling into a pipe that led to a sealed bunker in Poland. Once inside, the ants couldn’t climb back out. There were no plants or other life in the bunker, so the ants survived on other organisms that fell into the bunker, including eating their own dead (which they wouldn’t normally do, but if you’re in a tight spot, like an unused former nuclear weapons bunker, calories is calories.)

Second: after studying how they lived, the scientists tried transporting a small group of Bunker Ants to the surface to make sure they wouldn’t immediately behave in some kind of abnormal destructive way toward surface ants.

Then, when they didn’t, the scientists – in what I see as a breathtaking act of compassion – installed a plank into the pipe, so that the Bunker Ants could climb out of the bunker and be on the surface again.

Flash photo taken in a small grotty bunker room. In the middle are two new planks nailed together to make a bridge extending from the dirty floor of the bunker to a hole in the ceiling.
God shows up and apologizes for not noticing us sooner and says she’ll have the angels install one of these in the sky. [Image: Rutkowski et al 2019]

NEW DEEP SEA ANIMALS LOCATED. Listen. I’ve written about this before – I know so much about the weird little women of the deep ocean and still every time I learn some more there are NEW STRANGER WOMEN DOWN THERE. This is also true of all of humanity learning about the deep ocean I guess. You simply cannot have a beat on this place.

Other bad things

A Hong Kong finance worker joined a multi-person video call where all his colleagues’ videos and their voices were deepfakes. It was a scam and the worker was tricked into transferring them millions of dollars. So that’s a thing that can happen now!

Bellingcat’s investigation into a tugboat spilling oil off the coast of Tobago. (The first piece is linked, there have been more updates since.) I love Bellingcat; I’ve talked about this before. My reaction to keeping up with this series is equal parts “those wizards have done it again” and “there be some specific-ass websites in this world.”

…But when there’s not detailed public-facing information about something, you can make your own, as shown by volunteers tracking ICE deportations by setting up CCTV facing Boeing Field in Seattle and showing up weekly to watch the feed and count how many chained detainees are boarded onto planes. This is laudable dedication.

Do you have an off-brand video doorbell? Get rid of it! They’re incredibly insecure. They aren’t even encrypted. If you have an on-brand video doorbell, maybe still get rid of it, or at least switch it to using local storage. At least Ring has ended the program that made it easy for police to get footage without warrants, but other brands might have different systems, and if you ask me it’s pretty bad that they had that in the first place. (H/t Schneier on Security)

You know who is giving police sensitive customer personal information without warrants? Pharmacies! Yikes! (Again, h/t Schneier on Security)

Other interesting things

Mohists as early effective altruists? Ozy at Thing of Things writes about the Mohist philosophy of ancient China. I knew a little bit about this, but learning more it’s even cooler than I thought, and the parallels to modern rationality are surprising.

MyHouse.WAD is a fan-created map for the video game “Doom” that promises to be a map of a childhood home and turns into an evocative horror experience. I don’t know anything about Doom, so I’ve only experienced it in the form of youtube videos about the (real! playable!) map. You also don’t need to know anything about Doom to appreciate it. Power Pak’s video “MyHouse.WAD – Inside Doom’s Most Terrifying Mod” is the most popular video and for good reason.

If you liked that, you may also enjoy Spazmatic Banana’s “doom nerd blindly experiences myhouse.wad (and loses his mind)” (exactly what it sounds like) as well as DavidXNewton’s “The Machinations of myhouse.wad (How it works)” series (which, as it sounds like, explains how the map works – again, well explained even if you do not know a thing about Doom modding).

The earliest ARG began in the 1980s at the very beginning of the internet age and is based around a supposed research project that made a dimensional rift in the (real) ghost town of Ong’s Hat, New Jersey.

The world’s largest terrestrial vehicle is the Bagger 293, an otherworldly-looking machine that scrapes up topsoil for digging mines.

A colossal bucket-wheel excavator device. It looks kind of like a big shipping crane with a circular sawblade made of excavator buckets all frankensteined together. Some tiny people indicate the scale.
There’s debate to be had on the virtues or lack thereof of open pit mining, but I think we can all agree: they made a really big machine about it.

A cool piece on the woman who won the “Red Lantern” award for coming in last in the 2022 Iditarod. (H/t Briar.)

Hilariously, she wrote on Twitter:

A short story: The Mother of All Squid Builds a Library. (H/t Ozy.)

Kelsey Piper’s piece on regulations and why it’s good that the FAA lets parents on airplanes carry babies in their lap, even though this is known to be less safe in the event of plane accidents than requiring babies to have their own seats.

A 1500s illustration of three Aztec people with fancy food dishes in front of them.

Book review: Cuisine and Empire

[Header: Illustration of meal in 1500s Mexico from the Florentine Codex.]

People began cooking food maybe two million years ago and have not stopped since. Cooking is almost a cultural universal. Bits of raw fruit or leaves or flesh are a rare occasional treat or garnish – we prefer our meals transformed. There are other millennia-old procedures we use to turn raw ingredients into cuisine: separating parts, drying, soaking, slicing, grinding, freezing, fermenting. We do all of this for good reason: cooking makes food more calorically efficient and less dangerous. Other techniques contribute to this, or help preserve food over time. Also, it tastes good.

Cuisine and Empire by Rachel Laudan is an overview of human history by major cuisines – the kind of things people cooked and ate. It is not trying to be a history of cultures, agriculture, or nutrition, although it touches on all of these things incidentally, as well as some histories of things you might not expect, like identity and technology and philosophy.

Grains (plant seeds) and roots were the staples of most cuisines. They’re relatively calorically dense, storable, and grow within a season.

  • Remote islands really had to make do with whatever early colonists brought with them. Not only did pre-Columbian Hawaii not have metal, they didn’t have clay to make pots with! They cooked stuff in pits.

Running in the background throughout a lot of this is the clock of domestication – with enough time and enough breeding you can make some really naturally-digestible varieties out of something you’d initially have to process to within an inch of its life. It takes time, quantity, and ideally knowledge and the ability to experiment with different strains to get better breeds.

Potatoes came out of the Andes and were eaten alongside quinoa. Early potato cuisines didn’t seem to eat a lot of whole or cut-up potatoes – they processed the shit out of them: chopping, drying or freeze-drying them, soaking them, reconstituting them. They had to do all of this because the potatoes weren’t as consumer-friendly as modern breeds – less digestible composition, more phytotoxins, etc.

As cities and societies grew, so did wealth. Wealthy people all around the world started making “high cuisines” of highly-processed, calorically dense, tasty, rare, and fancifully prepared ingredients. Meat and oil and sweeteners and spices and alcohol and sauces. Palace cooks came together and developed elaborate philosophical and nutritional theories to declare what was good to eat.

Things people nigh-universally like to eat:

  • Salt
  • Fat
  • Sugar
  • Starch
  • Sauces
  • Finely-ground or processed things
  • A variety of flavors, textures, options, etc
  • Meat
  • Drugs
    • Alcohol
    • Stimulants (chocolate, caffeine, tea, etc)
  • Things they believe are healthy
  • Things they believe are high-class
  • Pure or uncontaminated things (both morally and from, like, lead)

All people like these things, and low cuisines were not devoid of joy, but these properties showed up way more in high cuisines than low cuisines. Low cuisines tended to be a lot of grain or tubers and bits of whatever cooked or pickled vegetables or meat (often wild-caught, like fish or game) could be scrounged up.

In the classic way that oppressive social structures become self-reinforcing, rich people generally thought that the rich were better off eating this kind of carefully balanced diet, while for the poor, eating meager, boring food wasn’t just necessary – it was good for them. They were physically built for it. Eating a wealthy diet would harm them.

In lots of early civilizations, food and sacrifice of food was an important part of religion. Gods were attracted by offered meals or meat and good smells, and blessed harvests. There were gods of bread and corn and rice.

One thing I appreciate about this book is that it doesn’t just care about the intricate high cuisines, even if they were doing the most cooking, the most philosophizing about cooking, and the most recordkeeping. Laudan does her best to pay at least as much attention to what the 90+% of regular people were eating all of the time.


Here’s a great passage on feasts in Ancient Greece, at the Temple of Zeus in Olympia, at the start of each Olympic games (~400 BCE):

On the altar, ash from years of sacrifice, held together with water from the nearby River Alpheus, towered twenty feet into the air. One by one, a hundred oxen, draped with garlands, raised especially for the event and without marks of the plow, were led to the altar. The priest washed his hands in clear water in special metal vessels, poured out libations of wine, and sprinkled the animals with cold water or with grain to make them shake their heads as if consenting to their death. The onlookers raised their right arms to the altar. Then the priest stunned the lead ox with a blow to the base of the neck, thrust in the knife, and let the blood spill into a bowl held by a second priest. The killing would have gone on all day, even if each act took only five minutes.

Assistants dragged each felled ox to one side to be skinned and butchered. For the assembled crowd, cooks began grilling strips of beef, boiling bones in cauldron, baking barley bannocks, and stacking up amphorae of wine. For the sacrifice, fat and leg and thigh bones rich in life-giving marrow were thrown on a fire of fragrant poplar branches, and the entrails were grilled. Symbolizing union, two or three priests bit together into each length of intestines. The bones whitened and crumbled; the fragrant smoke rose to the god.

Ancient Greek farmers had thin soil and couldn’t do much in the way of deliberate irrigation, so their food supply was more unpredictable than other places.

Country people kept a three-year supply of grain to protect against harvest failure and a four-year supply of oil. 

That’s so much!

That poor soil is also why Greeks relied on the olive tree for oil rather than on grains, which had better yields and took way less time to reach producing age – you could grow olive trees in places you couldn’t farm grain. And now we all know and love the oil from this tree. A tree is a wild place to get oil from! Similar story for grapevines.

  • The Spartans really liked this specific pork and blood soup called “black broth”.

This book was a fun read, on top of the cool history. Laudan has a straightforward, listful way of describing cuisines that really puts me in mind of a Redwall or George R. R. Martin feast description.

A royal meal in the Indian Mauryan Empire (circa 300 BCE or so):

For court meals, the meat was tempered with spices and condiments to correct its hot, dry nature and accompanied by the sauces of high cuisine. Buffalo calf spit-roasted over charcoal and basted with ghee was served with sour tamarind and pomegranate sauces. Haunch of venison was simmered with sour mango and pungent and aromatic spices. Buffalo calf steaks were fried in ghee and seasoned with sour fruit, rock salt, and fragrant leaves. Meat was ground, formed into patties, balls, or sausage shapes, and fried, or it was sliced, dried to jerky, and then toasted.

Or in around 600 CE, Mexican Teotihuacan eating:

To maize tamales or tortillas were added stews of domestic turkeys and dogs, and deer, rabbits, ducks, small birds, iguanas, fish, frog, and insects caught in the wild. Sauces were made with basalt pestles and mortars that were used to shear fresh green or dried and rehydrated red chiles, resulting in a vegetable puree that was thickened with tomatillos (Physalis philadelphica) or squash seeds. Beans, simply simmered in water, provided a tasty side dish. For the nobles, there were gourd bowls of foaming chocolate, seasoned with annatto and chili.

I’m a vegetarian who has no palate for spice, and now all I can think about is eating dog stew made with sheared fresh green chiles and plain beans.

Be careful about reading this book while broke on an airplane. You will try to convince yourself this is all academic and that you’re not that curious about what iguana meat tastes like. You’ll lose that internal battle. Then, in desperation, your brain will start in on a new phase. You’ll tell yourself, as you scrape the last of your bag of traveler’s food – walnut meat, dried grapes, and pieces of sweet chocolate – that you are waiting to be brought a complimentary snack of baked wheat crackers flavored with salt, and a cup of hot coffee with cow’s milk, sweetened with cane sugar, and that all of this is happening while you are flying. In this moment, you will be enlightened.


Grindstones are very important throughout history. A lot of cultures used hand grindstones at first and worked up to water- or animal-driven mills later. You grind grain to get flour, but you also grind things to get oil, spices, a different consistency of root, etc. People spent a lot of time grinding grain. There are a million kinds of hand grindstone. Some are still used today. When Roman soldiers marched around continents, they brought with them a relatively efficient rotary grindstone. They used mules to carry one 60-pound grindstone per 8 people. Every day, a soldier would grind for an hour and a half to feed those eight people. The grain would be stolen from storehouses conquered along the way.


Chapter 3 on Buddhist cuisines throughout Asia was especially great. Buddhism spread as sort of a reaction to the high sacrificial meat-n-grain cuisine of the time – a religious asceticism that really caught on. Ashoka spread it in India around 250 BCE, and over centuries it slowly seeped into China. Buddhists did not kill animals (mostly) nor drink alcohol, and ate a lot of rice. White rice, sugar, and dairy spread through Asia. In both China and India, as the rich got into it, Buddhism became its own new high cuisine: rare vegetables, sugar, ghee and other dairy, tea, and elaborate vegetarian dishes. So much for asceticism!

There is an extensive history of East Asian tofu and gluten-based meat substitutes that largely came out of vegetarian Buddhist influence. A couple 1100s and 1200s CE Chinese cookbooks are purely vegetarian and have recipes for things like mock lung (you know, like a mock hamburger or mock chicken, but if you’re missing the taste of lung.) (You might be interested in modern adaptations from Robban Toleno.)

Diets often go with religion. It’s a classic way to divide culture, and also, food and philosophy and ideas about health have always gone hand in hand in hand. Islamic empires spread cuisine over the Middle East. Christian empires brought their own food with them to other parts of the world.

A lot of early cuisines in Europe, the Middle East, India, Asia, and Mesoamerica were based on correspondences between types of food and elements and metaphysical ideas. You would try to reach balance. In Europe in the 1500s, these old incorrect ideas about nutrition started to be replaced with bold new incorrect ideas about nutrition. Instead of corresponding to four elements, food was actually made of three chemical elements: salt, oil, and vapor. The Swiss visionary Paracelsus thought chemistry could be based on the bible, and a century later was called a “master at murdering folk with chemistry”.

Fermenting took on its own magic:

Paracelsus suggested that “ferment” was spiritual, reinterpreting the links between the divine and bread in terms of his Protestant chemistry. When ferment combined with matter (massa in Latin, significantly also the word for bread dough), it multiplied. If this seems abstract, consider what happened in bread making. Bakers used a ferment or leaven[…] and kneaded it with flour and water. A few hours later, the risen dough was full of bubbles, or spirit. Ferment, close to the soul itself, turned lifeless stuff into vibrant, living bodies filled with spirit. The supreme example of ferment was Christ, described by the chemical physicians as fermentum, “the food of the soul.”

Again, cannot stress enough that the details of this food cosmology still got most things wrong. But I think they weren’t far off with this one.

There was an article I had bookmarked years ago about the very early days of microbiology and how many people interpreted this idea of tiny animalcules found in sexual fluid and sperm as literal demons. Does anyone know about this? I feel like these dovetail very nicely in a history of microbiological theology.


Corn really caught on in the 1800s as a food for the poor in East and Central Africa, Italy, Japan, India, and China. I don’t really know how this happened. I assume it grew better in some climates than native grains, like potatoes did in Europe?

Corn cuisines in the Americas knew to treat the corn with lye to release more of its nutrients, neutralize toxins, and make it taste better. This is called nixtamalization. When corn spread to Eurasia, it was grown widely, but nixtamalization didn’t make it over. The Eurasian eaters had to get those nutrients from elsewhere. They still ate corn, but it was a worse time!

  • In Iceland, where no crops would grow, people would use dried fish called “stockfish” and spread sheep butter on it and eat it instead of bread.

Caloric efficiency was a fun recurring theme. See again, the slow adoption of the potato into Europe. Cuisine has never been about maximizing efficiency. Once bare survival is assured, people want to eat what they know and what has high status in their minds.

I think this is a statement about the feedback cycles of individual people – for instance, subsistence farmers. Suppose you’re a Polish peasant in 1700 and you struggle by, year by year, growing wheat and rye. But this year you have access to potatoes, a food you somewhat mistrust. You might trust it enough to eat a cooked potato handed to you if you were starving – but when you make decisions about what to plant for a year, you will be reluctant to commit yourself and your family to a diet of a possibly-poisonous food (or to a failed crop – you don’t know how to grow potatoes, either). Even if it’s looking like a dry year – especially if it’s looking like a dry year! – you know wheat and rye. You trust wheat and rye. You’ve made it through a lean year of wheat and rye before. You’ll do it again.

People are reluctant to give up their staple crops, but they do sometimes replace them. Barley was solidly replaced by the somewhat-more-efficient wheat throughout Europe, millet by rice and wheat in China. But we settled on ones we like:

The staples that humans had picked out centuries before 1000 B.C.E. still provide most of the world’s human food calories. Only sugarcane, in the form of sugar, was to join them as a major food source.

Around 1650 in Europe, Protestant-derived French cuisine overtook high Catholic cuisine as the main food of the European aristocracy.

Catholic cuisine              French cuisine
Roasts                        Pastry
Fancy pies                    Fancy sauces
Pottage                       Bouillons and extracts
Cold foods are bad for you    Raw salads
Fasting dishes                Focus on vegetables
Lard                          Butter

Coming up in more recent times, say the 1700s, was a very slow equalizing in society:

As more nations followed the Dutch and British in locating the source of rulers’ legitimacy not in hereditary or divine rights but in some form of consent or expression of the will of the people, it became increasingly difficult to deny to all citizens the right to eat the same kind of food.

After the French revolution, high French cuisine was almost canceled in France. Everyone should eat as equals, even if the food was potatoes! Fortunately – unfortunately – as it happened, Napoleon came in after not too long, and imperial high cuisine was back on a very small number of menus.

Speaking of potatoes and self-governance:

The only place where potatoes were adopted with enthusiasm was in distant [from Europe] New Zealand. The Maoris, accustomed to the subtropical roots that they had introduced to the North Island, welcomed them when introduced by Europeans in the 1770s because they grew in the colder South Island. Trading potatoes for muskets with European whalers and sealers enabled the Maoris to resist the British army from the 1840s to the 1870s.

Meanwhile, in Europe: Hey, we’re back to meat and grain! Britain really prided itself on beef and attributed the strength of its empire to beef. Even colonized peoples were like “whoa, maybe that beef and bread they’re eating really is making them that strong, we should try that.” Here’s a 1900 ad for beef extract that aged poorly:

[Source of this version. The brand of beef extract is spelled out of British colonies.]

That said, I did enjoy Laudan’s defense of British food. Starting in 1800, the British Empire was well underway, and what we now think of as stereotypical British cuisine was developing. It was heavy in sugar and sweets, white bread, beef, and prepared food. During the early industrial revolution, food and nutrition and the standard of living went down, but by the 1850s, all of it really came back.

It is worth noting that few cuisines have been so roundly condemned as nutritional and gastronomical disasters as British cuisine.

But Laudan points out that this food was not aristocrat food (the aristocrats were still eating French cuisine). It was the food of the working city poor. This is the rise of the “middling cuisines” – a real alternative to both the fancy high cuisine of a tiny percentage of society and the humble cuisine of peasants who often faced starvation. For once, they had enough to eat. This was new.

After discussing the various ways in which the diet may have been bland or unappealing compared to neighboring cuisines –

Nonetheless, from the perspective of the urban salaried and working classes, the cuisine was just what they had wished for over the centuries: white bread, white sugar, meat, and tea. A century earlier, not only were these luxuries for much of the British population, but the humble were being encouraged to depend on potatoes, not bread, a real comedown in a society in which some kind of bread, albeit a coarse one, had been central to well-being for centuries. Now all could enjoy foodstuffs that had been the privilege of the aristocracy just a few generations earlier. Indeed, the meal called tea came close to being a true national cuisine. Even though tea retained traces of class distinctions, with snobberies about how teacups should be held, or whether milk or tea should be put into the cup first, everyone in the country, from the royal family, who were painted taking tea, to the family of a textile worker in the industrial north of the country, could sit down to white bread sandwiches or toast, jam, small cakes, and an iced sponge cake as a centerpiece. They could afford the tea that accompanied the meal. Set out on the table, tea echoed the grand buffets of eighteenth-century French high cuisine. [...] What seemed like culinary decline to those Britons who had always dined on high or bourgeois cuisine was a vast improvement to those enjoying those ampler and more varied cuisines for the first time.

[...]

Although to this day food continues to be used to reinforce minor differences in status, the hierarchical culinary philosophy of ancient and traditional cuisines was giving way to the more egalitarian culinary philosophy of modern cuisines.

A lot of this was facilitated by imperialism and/or outright slavery. The tea itself, for instance. But Britain was also deeply industrialized. Increased crop productivity, urbanization, and industrial processing were also making Britain’s home-grown food – wheat, meat – cheaper too, or bringing these processes home. At the start of this period, sugar had been grown and harvested by slaves to feed Europe’s appetites, but around 1800, Prussian inventors figured out how to make sugar at scale from beets.

The work was done by men paid salaries or wages, not by slaves or indentured laborers. The sugar was produced in northern Europe, not in tropical colonies. And the price was one all Europeans could afford. 

This was the sugar the British were eating then. Industrialization offered factory production of foods, canning, wildly cheap salt, and refrigeration.

We’re reaching the modern age, where the empires have shrunk and most people get enough calories and have access to industrially-cheap food and the fruits of global trade. Laudan discusses at length the hamburger and instant ramen – wheat flour, fat, meat or meat flavor, low price, and convenience. New theories of nutrition developed and we definitely got them right this time. The empires break up and worldwide leaders take pride in local cuisines, manufacturing a sense of identity through food if needed. Most people have the option of some dietary diversity and a middling cuisine. Go back to that list of things people like to eat. Most of us have that now! Nice!

  • Nigeria is the biggest importer of Norwegian stockfish. It caught on as a relief food delivered during Nigeria’s Biafran civil war. Here’s a 1960s photo of a Nigerian guy posing in a Bergen stockfish warehouse.

Aw, wait, is this a book review? Book review: Great stuff. There’s a lot of fascinating stuff not included in this summary. I wish it had more on Africa but I did like all the stuff about Eurasia that was in there. I feel like there are a few cultures with really really meat heavy cuisines – like Saami or Inuit cuisine – that could have been at least touched on. But also those aren’t like major cuisines and I can just learn about those on my own. Overall I appreciated the unwavering sense of compassion and evenhandedness – discussing cuisines and falsified theories of nutrition without casting judgment. Everyone’s just trying to eat dinner.

Rachel Laudan also has a blog. It looks really cool.

Cuisine and Empire by Rachel Laudan

The book is “Cuisine and Empire” by Rachel Laudan, 2012. h/t my friend A for the recommendation.


More food history from Eukaryote Writes Blog: Triptych in Global Agriculture.

If you want to support my work by chucking me a few bucks per post, check out my Patreon!

Defending against hypothetical moon life during Apollo 11

[Header image: Photo of the lunar lander taken during Apollo 11.]

In 1969, after men were successfully brought back from landing on the moon, the astronauts, their spacecraft, and all the samples from the lunar surface were quarantined for 21 days. This was to account for the possibility that they were carrying hostile moon germs. Once the quarantine was up and the astronauts were not sick, and extensive biological testing on them and the samples showed no signs of infection or unexpected life, the astronauts were released.

We know now that the moon is sterile. We didn’t always know this. That was one of the things we hoped to find out from the Apollo 11 program, which was the first time not only that people would visit another celestial body, but that material from another celestial body would be brought back in a relatively pristine fashion to earth. The possibilities were huge.

The possibilities included life, although nobody thought this was especially likely. But in that slim chance of life, there was a chance that life would be harmful to humans or the earth environment. Human history is full of organisms wreaking havoc when introduced to a new location – smallpox in the Americas, rats on Pacific islands, water hyacinth outside of South America. What if there were microbes on the moon? Even if there was only a tiny chance, wouldn’t it be worth taking careful measures to avoid the risk of an unknown and irreversible change to the biosphere?

NASA, Congress, and various other federal agencies were apparently convinced to spend millions of dollars building an extensive new facility and taking other extensive measures to address this possibility.

This is how a completely abstract argument about alien germs was taken seriously and mitigated at great effort and expense during the 1969 Apollo landing.


Will the growing deer prion epidemic spread to humans? Why not?

Helpful background reading: What’s the deal with prions?

A novel lethal infectious neurological disease emerged in American deer a few decades ago. Since then, it’s spread rapidly across the continent. In areas where the disease is found, it can be very common in the deer there.

Map from the Cornell Wildlife Health Lab.

Chronic wasting disease isn’t caused by a bacterium, virus, protist, or worm – it’s a prion, a slightly misshapen version of a protein that occurs naturally in the nervous systems of deer.

Chemically, the prion is made of exactly the same stuff as its regular counterpart – it’s a string of the same amino acids in the same order, just shaped a little differently. Both the prion and its regular version (PrP) are monomers, single units that naturally stack on top of each other or onto very similar proteins. The prion’s trick is that as other PrP molecules move to stack atop it, the prion reshapes them – just a little – so that they also become prions. These chains of prions are quite stable, and, over time, they form long, persistent clusters in the tissue of their victims.
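If it helps to see that “every converted molecule becomes a new template” dynamic laid out, here’s a toy simulation – my own illustrative sketch, not anything from the prion literature. The pool size and conversion rate are invented numbers chosen only to make the curve visible.

```python
# Toy model of templated prion conversion (illustrative only -- the numbers are
# made up, and real prion kinetics also involve nucleation, fibril fragmentation,
# and clearance, none of which are modeled here).

def simulate_conversion(normal_prp=1_000_000, prions=1.0,
                        conversion_rate=1e-7, steps=200):
    """Discrete-time mass-action sketch.

    normal_prp      -- pool of correctly folded PrP (arbitrary units)
    prions          -- misfolded "seed" molecules
    conversion_rate -- per-step chance that a given prion converts a given PrP
                       unit (hypothetical value, chosen for a visible curve)
    """
    history = []
    for _ in range(steps):
        converted = conversion_rate * prions * normal_prp  # conversions scale with both pools
        converted = min(converted, normal_prp)             # can't convert what isn't there
        normal_prp -= converted
        prions += converted            # every converted molecule is itself a new template
        history.append(prions)
    return history

curve = simulate_conversion()
print(f"prions after 50 steps: {curve[49]:,.0f}")    # early phase: roughly exponential growth
print(f"prions after 200 steps: {curve[-1]:,.0f}")   # late phase: plateaus as normal PrP runs out
```

The prion count snowballs and then plateaus when the normal PrP pool runs out. Note that nothing in that loop touches DNA or RNA – a point that matters later, when we get to why prions are so constrained.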

We know of only a few prion diseases in humans. They’re caused by random chance misfolds, a genetic predisposition for PrP to misfold into a prion, accidental cross-contamination via medical supplies, or, rarely, the consumption of prion-infected meat. Every known animal prion is a misfold of the same specific protein, PrP. PrP is expressed in the nervous system, particularly in the brain – so infections cause neurological symptoms and physical changes to the structure of the brain. Prion diseases are slow to develop (up to decades), incurable, and always fatal.

There are two known infectious prion diseases in people. One is kuru, which caused an epidemic among tribes who practiced funerary cannibalism in Papua New Guinea. The other is mad cow disease – bovine spongiform encephalopathy (BSE) in cows, variant Creutzfeldt-Jakob disease in humans – which was first seen in humans in 1996 in the UK, and comes from cows.

Chronic wasting disease (CWD)…

  • Is, like every other animal prion disease, a misfold of PrP. PrP is quite similar in both humans and deer.
  • Is found in multiple deer species which are commonly eaten by humans.
  • Can be carried in deer asymptomatically.

But it doesn’t seem to infect people. Is it ever going to? If a newly-emerged virus were sweeping across the US and killing deer, which could be spread through consuming infected meat, I would think “oh NO.” I’d need to see very good evidence to stop sounding the alarm.

Now, the fact that it’s been a few decades, and it hasn’t spread to humans yet, is definitely some kind of evidence about safety. But are we humans basically safe from it, or are we living on borrowed time? If you live in an area where CWD has been detected, should you eat the deer?

Sidenote: Usually, you’ll see “BSE” used for the disease in cows and “vCJD” for the disease in humans. But they’re caused by the same agent and this essay is operating under a zoonotic One Health kind of stance, so I’m just calling the disease BSE here. (As well as the prion that causes it, when I can get away with it.)

In short

The current version of CWD is not infectious to people. We checked. BSE showed that prions can spill over, and there’s no reason a new CWD variant couldn’t someday do the same. The more cases there are, the more likely it is to spill over. That said, BSE did not spill over very effectively. It was always incredibly rare in humans. It’s an awful disease to get, but the chance of getting it is tiny. Prions in general have a harder time spilling over between species than viruses do. CWD might behave somewhat differently, but it will probably stay hampered by the species barrier.

Why do I think all of this? Keep reading.

North American elk (wapiti), which can carry CWD. This and the image at the top of the article are adapted from a photo from the Idaho Fish and Game department, under a CC BY 2.0 license.

Prions aren’t viruses

I said before that if a fatal neurological virus were infecting deer across the US, and showed up in cooked infected meat, my default assumption would be “we’re in danger.” But a prion isn’t a virus. Why does that matter?

Let’s look at how they replicate. A virus is a little bit of genetic material in a protein coating. You, a human, are a lot of genetic material in a protein coating. When a virus replicates, it slips into your cells and hijacks your replication machinery to run its genes instead. Instead of all the useful-to-you tasks your genome has planned, the virus’s genome outlines its own replication, assembles a bunch more viruses, and blows up the factory (cell) to turn them loose into the world.

In other words, the virus is using a robust information-handling system that both you and it have in common – the DNA → RNA → protein pipeline often called “the central dogma” of biology. To a first approximation, you can add any genetic information at all into the viral genome, and as long as it doesn’t interfere with the virus’s process, whatever you add will get replicated in there too.

Prions do not work like this. They don’t tap into the central dogma. What makes them so fundamentally cool is that they replicate without touching the replication machinery that everything else alive uses – their replication is structural, like a snowflake forming. The host provides raw material in the form of PrP, and the prion – once it lands – encourages that material to take on the right shape for more to form atop it.

What this means is that you can’t encode arbitrary information into a prion. It’s not as though a prion runs on a separate “protein genome” that we could decipher and then encode what we like into. The entire structure of the prion has to work together to replicate itself. If you made a prion with some different fold in it, that fold would have to not just form a stable protein, but pass itself along as well. Prions don’t have a handy DNA replicase enzyme to outsource to – they have to solve the problem of replication themselves, every time.

Prions can evolve, but they do it less – they have fewer free options. They’re more constrained than a virus would be in terms of changes that don’t interrupt the rest of the refolding process and that, on top of that, propagate themselves.

This means that prions are slower to evolve than viruses. …I’m pretty sure, at least. It makes a lot of sense to me. The thing that this definitely means is that:

It’s very hard for prions to cross species barriers

PrP is a very conserved protein across mammals, meaning that all mammals have a version of PrP that’s pretty similar – 90%+ similarity.* But the devil lies in that 10%.

Prions are finely tuned – to convert PrP into a prion, the PrP basically needs to be identical, or at least functionally identical, everywhere the prion works. It not only needs to be susceptible to the prion’s misfolding, it also needs to fold into something that can itself replicate. A few amino acid differences can throw a wrench in the works.

It’s clear that infectious prions can have a hard time crossing species barriers. It depends on the strain. For instance: Mouse prions convert hamster PrP.** Hamster prions don’t convert mouse PrP. Usually a prion strain converts its usual host PrP best, but one cat prion more efficiently converts cow PrP. In a test tube, CWD can convert human or cow PrP a little, but shows slightly more action with sheep PrP (and much more with, of course, deer PrP.)

This sounds terribly arbitrary. But remember, prion behavior comes down to shape. Imagine you’re playing with legos and duplo blocks. You can stack legos on legos and duplos on duplos. You can also put a duplo on top of a lego block. But then you can only add duplo blocks on top of that – you’ve permanently changed what can get added to that stack.

When we look at people – or deer, or sheep, etc. – who are genetically resistant to prions (more on that later), we find that serious resistance can be conferred by single-nucleotide changes in the PrP gene. Tweak one single letter of DNA in the right place, and their PrP just doesn’t bend into the prion shape easily. If the infection takes at all, it proceeds slowly – slow enough that a person might die of old age before the prion would kill them.

So if a decent number of members of a species can be resistant to prion diseases based on as little as one amino acid – then a new species, one whose PrP might differ by dozens of amino acids, is unlikely to be fertile ground for an old prion.

* (This is kind of weird given that we don’t know what PrP actually does – the name PrP just stands for “prion protein,” because it’s the protein that’s associated with prions, and we don’t know its function. We can genetically alter mice so that they don’t produce PrP at all, and they show slight cognitive issues but are basically fine. Classic evolution. It’s appendices all over again.)
** Sidebar: When we look at studies for this, we see that, like a lot of pathology research, there’s a spectrum of experiments falling at different points on the axis from “deeply unrealistic” to “a pretty reasonable simulacrum of natural infection,” like:

1. Shaking up loose prions and PrP in a petri dish and seeing if the PrP converts

2. Intracranial injection with brain matter (i.e. grinding up a diseased brain and injecting some of that nasty juice into the brain of a healthy animal and seeing if it gets sick)

3. Feeding (or some other natural route of exposure) a plausible natural dose of prions to a healthy animal and seeing if that animal gets sick

The experiments mentioned below are based on 1. Only experiments that do 3 actually prove the disease is naturally infectious. For instance, Alzheimer’s disease is “infectious” if you do 2, but since nobody does that, it’s not actually a contagious threat. That said, doing more-abstracted experiments means you can really zoom in on what makes strain specificity tick. 

But prions do cross species barriers

Probably the best counterargument to everything above is that another prion disease, BSE, did cross the species barrier. This prion pulled off a balancing act: it successfully infected cows and humans at the same time.

Let’s be clear about one big and interesting thing: BSE is not good at crossing the species barrier. When I say this, I mean two things:

First, people did not get it often. While the big UK outbreak was famously terrifying, only around 200 people ever got sick from mad cow disease. Around 200,000 cows tested positive for it. But most cows weren’t tested. Researchers estimate that 2 million cows total in the UK had BSE, most of which were slaughtered and entered the food chain. These days, Britain has 2 million cows at any given time.

At first glance, and to a first approximation, I think everyone living in the UK for a while between 1985 and 1996 or so (who ate beef sometimes) must have eaten beef from an infected animal. That’s approximately who the recently-overturned blood donation ban in the US affected. I had thought that was sort of an average over who was at risk of exposure – but no, that basically encompassed everyone who was exposed. Exposure rarely leads to infection.
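To put rough numbers on “rarely” – this is my own back-of-envelope arithmetic, not a figure from any source, and the exposed-population number is just an assumed order of magnitude for beef-eating Britons of that era:

```python
# Back-of-envelope: per-exposed-person risk of human BSE (vCJD).
# Assumption (mine, not from the post): on the order of 50 million UK residents
# ate beef from the affected food chain at some point during 1985-1996.
exposed_people = 50_000_000   # assumed order of magnitude, not a sourced figure
human_cases = 200             # ~200 people ever got sick, per the post

risk_per_exposed_person = human_cases / exposed_people
print(f"~{risk_per_exposed_person * 1_000_000:.0f} cases per million exposed people")  # ~4
```

A few cases per million exposed people, total, over the whole outbreak – which is the comparison the next line is making.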

You’re more likely to get struck by lightning than to get BSE even if you have eaten BSE-infected beef.

Second, in the rare cases where the disease takes, it’s slower. Farm cows live short lives, and the cows that died from BSE – at around 4–5 years post-exposure – were already old by beef-industry standards. They survived at most weeks or months after symptoms began. Humans infected with BSE, meanwhile, can harbor it for up to decades post-exposure, and live an average of over a year after showing symptoms.

I think both of these are directly attributable to the prion just being less efficient at converting human PrP – versus the PrP of the cows it was adapted to. It doesn’t often catch on in the brain. When it does, it moves extremely slowly.

But it did cross over. And as far as I can tell, there’s no reason CWD can’t do the same. Like viruses, CWD has been observed to evolve as it bounces between hosts with different genotypes. Some variants of CWD seem more capable of converting mouse PrP than the common ones. The good old friend of those who play god, serial passaging, encourages it.

(Note also that all of the above differs from kuru, which did cause a proper epidemic. Kuru spread between humans and was adapted for spreading in humans. For CWD, BSE is the better reference point, because it spread between cows and only incidentally jumped to humans – it was never adapted for human spread.)

How is CWD different from BSE?

BSE appears in very low, very low numbers anywhere outside the brains and spines of its victims. CWD is also concentrated in the brain, but it appears in the spine and lymphatic tissue too, and to a lesser but still-present degree, everywhere else: muscle, antler velvet, feces, blood, saliva. It’s more systemic than BSE.

Cows are concentrated in farms, and so are some deer, but wild deer carry CWD all hither and yon. As they do, they leave it behind in:

  • Feces – Infected deer shed prions in their feces. An animal that eats an infected deer might also shed prions in its feces.
  • Bodies – Deer aren’t strictly herbivorous if push comes to shove. If a deer dies, another deer might eat the body. One study found that after a population of reindeer started regularly gnawing on each other’s antlers (#JustDeerThings), CWD swept in.
  • Dirt – Prions are resilient and can linger, viable, in soil. Deer eat dirt accidentally while eating grass, as well as on purpose from time to time, and can be infected that way.
  • Grass – Prions in the soil or otherwise deposited onto plant tissue can hang out in living grass for a long time.
  • Ticks – One study found that ticks fed CWD prions don’t degrade the protein. If they’re then eaten by deer (for instance, during grooming), they could spread CWD. This study isn’t perfect evidence; the authors note that they fed the ticks a concentration of prions about 1000x higher than is found in infected deer blood. But if my understanding of statistics and infection dynamics is correct, that suggests that maybe 1 in 1000 ticks feeding on infected deer blood reaches that level of infectivity? Deer have a lot of ticks! Still pretty bad!

That’s a lot of widespread potentially-infectious material.

When CWD is in an area, it can be very common – up to 30% of wild deer, and up to 90% of deer on an infected farm. These deer can carry CWD and have it in their tissues for quite some time asymptomatically – so while it frequently has very visible behavioral and physical symptoms, it also sometimes doesn’t.

In short, there’s a lot of CWD in a lot of places throughout the environment. It’s also spreading very rapidly. If a variant capable of infecting both deer and humans emerged, there would be a lot of chances for exposure.

Deer on a New Zealand deer farm. By LBM1948, under a CC BY-SA 4.0 license.

What to do?

As an individual

As with any circumstance at all, COVID or salmonella or just living in a world that is sometimes out to get you, you have to choose what level of risk you’re alright with. At first, writing this piece, I was going to make a suggestion like “definitely avoid eating deer from areas that have CWD just in case your deer is the one that has a human-transmissible prion disease.” I made a little chart about my sense of the relative risk levels, to help put the risk in scale even though it wasn’t quantified. It went like this:

Imagine a spectrum of risk of getting a prion disease. On one end, which we could call "don't do this", is "eating beef from an animal with BSE". Close to that but slightly less risky is "eating deer from an animal with CWD". On the other very safe end is "eating beef from somewhere with known active BSE cases". This entire model is wrong, though.

But, as usual, quantification turns out to be pretty important. I actually did the numbers about how many people ever got sick from BSE (~200) and how many BSE-infected cows were in the food chain (~2,000,000), which made the actual risk clear. So I guess the more prosaic version looks like this:

Remember that spectrum of risk? Well, all of these risks are infinitesimal. Worry about something else! Eating beef from an animal with BSE is still more dangerous than eating deer from an animal with CWD, which is more dangerous than eating beef from somewhere without known active BSE cases - but all of these are clustered very, very far on the safe side of the graph.

…This is sort of a joke, to be clear. There’s not a health agency anywhere on earth that will advise you to eat meat from cows known to have BSE, and the CDC recommends not eating meat from deer that test positive for CWD (though it’s never infected a human before.)

On top of that, the overall threat is still uncertain because what you’re betting on is “the chance that this animal will have had an as-of-yet undetected CWD variant that can infect humans.” There’s inherently no baseline for that!

We don’t know what CWD would act like if it spilled over. It might be more infectious and dangerous than other infectious prion diseases we’ve seen – remember, with humans, the sample size is 2! So if CWD is in your area and it’s not a hardship to avoid eating deer, you might want to steer clear. …But the odds are in your favor.

As a society

There’s not an obvious solution. The epidemic spreading among deer isn’t caused by a political problem; it comes from nature.

The US is doing a lot right: mainly, it is monitoring and tracking the spread of the disease. It’s spreading the word. (If nothing else, you can keep track of this by subscribing to google alerts for “chronic wasting disease”, and then pretty often you’ll get an email saying things like “CWD found in Florida for the first time” or “CWD found an hour from you for the first time.”) It is encouraging people to submit deer heads for testing, and not to eat meat from deer that test positive. The CDC, APHIS, Fish & Wildlife Service, and more are all aware of the problem and participating in tracking it.

What more could be done? Well, a lot of the things that would help a potential spillover of CWD look like actions that can be taken in advance of any threatening novel disease. There is research being done on prions and how they cause disease, better diagnostics, and possible therapeutics. All of these are important. Prion disease diagnosis and treatment is inherently difficult, and on top of that, has little overlap with most kinds of diagnosis or treatment. It’s also such a rare set of diseases that it’s not terribly well studied. (My understanding is that right now there are various kinds of tests for specific prion diseases – which could be adapted for a new prion disease – that are extremely sensitive although not particularly cheap or widespread.)

I don’t know a lot about the regulatory or surveillance situation vis-a-vis deer farms, or for that matter, much about deer farms at all. I do know that they seem to be associated with outbreaks, and with heavy disease prevalence once there is an outbreak. That’s a smart area to keep an eye on.

If CWD did spill over, what would happen?

First, everyone gets very nervous about eating venison for a while.

It will probably also take time to locate cases and identify the culprit, but given the aforementioned awareness and surveillance of the issue, it ought to take way less time than it took to identify the causative agent of BSE. Officials are already paying attention to deaths that could potentially be CWD-related, like neurodegenerative illnesses that kill young people.

After that, I expect the effects will look a lot like the aftermath of mad cow disease. Mad cow disease, and very likely a hypothetical CWD spillover, would not be transmissible between people in usual ways – coughing, skin contact, fomites, whatever.

It is transmissible via unnatural routes, which is to say, blood transfusions. You might remember how people who’d spent over 6 months in Britain couldn’t donate blood in the US until 2022 – a direct response to the BSE outbreak. Yes, the disease was extremely rare, but unless you can quickly and cheaply test incoming blood donations, an infected donor could give blood to multiple people. Suppose some of them donate blood down the line. You’d have a chain of infection, with a disease that has a potentially decades-long incubation period. And remember, the disease is incurable and fatal. So basically, the blood donation system (and probably organ donation too) becomes very problematic.

That said, I don’t think it would break down completely. In the BSE case, lots of people in the UK eat beef from time to time – probably most people. But with a deerborne disease, I would guess that a lot of the US population could confidently declare that they haven’t eaten deer within the past, say, year or so (prior to a detected outbreak.) So I think there’d be panic and perhaps strain on the system but not necessarily a complete breakdown. Again, all of this is predicated on a new prion disease working like known human prion diseases.

Genetic resistance

One final fun fact: people with a certain genotype in the PrP gene – specifically, PRNP codon 129 M/V or V/V – are incredibly genetically resistant to known infectious prion diseases. If they do get infected, they survive for much longer.

It’s not clear that this would hold true for a hypothetical CWD crossover to humans, but it is true for both kuru and BSE. It’s also partly (although not totally) protective against sporadic Creutzfeldt-Jakob disease.

If you’ve gotten a service like 23&me, maybe check out your data and see if you’re resistant to infectious prion diseases. Here’s what you’re looking for:

129M/V or V/V (amino acids), or G/G or A/G (nucleotides) – rs1799990

If you instead have M/M (amino acids) or A/A (nucleotides) at that site, you’re SOL – at a higher, but still very low, overall risk.
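If you’d rather script that lookup than squint at a browser table, here’s a minimal sketch against a 23&me-style raw data export – assuming a tab-separated file of rsid / chromosome / position / genotype lines with “#” comment headers; the file name is made up:

```python
# Minimal sketch: look up rs1799990 (PRNP codon 129) in a 23&me-style raw export.
# Assumptions: tab-separated rsid / chromosome / position / genotype lines,
# "#" comment lines at the top, and a hypothetical file name.

def prnp_codon_129(path="raw_genome_data.txt"):
    with open(path) as handle:
        for line in handle:
            if line.startswith("#"):
                continue
            fields = line.rstrip("\n").split("\t")
            if len(fields) >= 4 and fields[0] == "rs1799990":
                return fields[3]  # e.g. "AA", "AG", or "GG"
    return None

genotype = prnp_codon_129()
if genotype is None or "-" in genotype:
    print("rs1799990 wasn't called in this export.")
elif "G" in genotype:
    print(f"{genotype}: at least one 129V allele – the more resistant genotypes.")
else:
    print(f"{genotype}: 129M/M – higher, though still very low, overall risk.")
```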


Final thoughts

  • I think exercises like “if XYZ disease emerges, what will the ramifications and response be” are valuable. They lead to questions like “what problems will seem obvious in retrospect” and “how can we build systems now that will improve outcomes of disasters.” This is an interesting case study and I might revisit it later.

  • Has anyone reading this ever been struck by lightning? That’s the go-to comparison for things being rare. But 1 in 15,000 isn’t, like, unthinkably rare. I’m just curious.

  • No, seriously, what’s the deal with deer farms? I never think about deer farms much. When I think of venison, I imagine someone wearing camo and carrying a rifle out into a national forest or a buddy’s backyard or something. How many deer are harvested from hunting vs. farms? What about in the US vs. worldwide? Does anyone know? Tell me in the comments.

This essay was crossposted to LessWrong. Also linked at the EA Forums.

If you want to encourage my work, check out my Patreon. Today’s my birthday! I sure would appreciate your support.

Also, this eukaryote is job-hunting. If you have or know of a full-time position for a researcher, analyst, and communicator with a Master’s in Biodefense, let me know:

Eukaryote Writes Blog (at) gmail (dot) com

In the meantime, perhaps you have other desires. You’d like a one-off research project, or there’s a burning question you’d love a well-cited answer to. Maybe you want someone to fact-check or punch up your work. Either way, you’d like to buy a few hours of my time. Well, I have hours, and the getting is good. Hit me up! Let’s chat. 🐟

Woodblock print of swimming prawns

Eukaryote in Asterisk Magazine + New Patreon Per-post setup

Eukaryote elsewhere

I have an article in the latest issue of Asterisk Magazine. After you get really deep into the weeds of invertebrate sentience and fish welfare and the scale of factory farming, what do you do with that information vis-a-vis what you feel comfortable eating? Here’s what I’ve landed on and why. Read the piece that Scott Alexander characterized as making me sound more annoying to eat with than I really am.

(Also check out the full piece of delightful accompanying art from Karol Banach.)

Check out the rest of the issue as well.

A new better Patreon has landed

This blog has a Patreon! Again! I’m switching from the old per month payment model to a new pay per post system, since this blog has not been emitting regular monthly updates in quite some time. So if you get excited when you see Eukaryote Writes Blog in your feed, and you want to incentivize more of that kind of thing, try this new and improved system for giving me money.

Here’s the link. Consider a small donation per post. Direct incentives: Lots of people are fans. I’m no effective charity but the consistent revenue does have a concrete and pleasant impact on my life right now, so I do really appreciate it.

It’s important to me that the things I write here are freely available. This will continue to be true! I might think of some short bits of content that will be patron-exclusive down the line, but anything major? Your local eukaryote is here to write a blog, not a subscription service. It’s in the name.

Helpful notes

  • To be clear, the payment will trigger per substantial new post. Updates of content elsewhere, metablogging like this, short corrections, etc, won’t count.
  • You can set a monthly limit in Patreon, even with the per-post model. For the record, I think it’s unlikely I’d put out more than 1-2 posts per month even in the long term future.
  • And of course, you can change your payment or unsubscribe at any old time you please.
Woodblock print of swimming prawns
Excerpt of Horse Mackerel (Aji) with Shrimp or Prawn, by Utagawa Hiroshige, ~1822-23. Public Domain.
An old knit tube with colorful stripes

Who invented knitting? The plot thickens

Last time on Eukaryote Writes Blog: You learned about knitting history.

You thought you were done learning about knitting history? You fool. You buffoon. I wanted to double check some things in the last post and found out that the origins of knitting are even weirder than I guessed.

Humans have been wearing clothes to hide our sinful sinful bodies from each other for maybe about 20,000 years. To make clothes, you need cloth. One way to make cloth is animal skin or membrane, that is, leather. If you want to use it in any complicated or efficient way, you also need some way to sew that – very thin strips of leather, or taking sinew or plant fiber and spinning it into thread. Also popular since very early on is taking that thread, and turning it into cloth. There are a few ways to do this.

A drawing showing loose fiber, which turns into twisted thread, which is arranged in various ways to make different kinds of fabric structures. Depicted are the structures for: naalbound, woven, knit, looped, and twined fabric.
By the way, I’m going to be referring to “thread” and “yarn” interchangeably from here on out. Don’t worry about it.

(Can you just sort of smush the fiber into cloth without making it into thread? Yes. This is called felting. How well it works depends on the material properties of the fiber. A lot of traditional Pacific Island cloth was felted from tree bark.)

Now with all of these, you could probably make some kind of cloth by taking threads and, by hand, shaping them into these different structures. But that sounds exhausting and nobody did that. Let’s get tools involved. Each of these structures corresponds to a different manufacturing technique.

By far, the most popular way of making cloth is weaving. Everyone has been weaving for tens of thousands of years. It’s not quite a cultural universal but it’s damn close. To weave, you need a loom.1 There are ten million kinds of loom. Most primitive looms can make a piece of cloth that is, at most, the size of the loom. So if you want to make a tunic that’s three feet wide and four feet long, you need cloth that’s at least three feet wide and four feet long, and thus, a loom that’s at least three feet wide and four feet long. You can see how weaving was often a stationary affair.

Recap

Here’s what I said in the last post: Knitting is interesting because the manufacturing process is pretty simple, needs simple tools, and is portable. The final result is also warm and stretchy, and can be made in various shapes (not just flat sheets). And yet, it was invented fairly recently in human history.

I mostly stand by what I said in the last post. But since then I’ve found some incredible resources, particularly the scholarly blogs Loopholes by Cary “stringbed” Karp and Nalbound by Anne Marie Deckerson, which have sent me down new rabbit-holes. The Egyptian knit socks I outlined in the last post sure do seem to be the first known knit garments, like, a piece of clothing that is meant to cover your body. They’re certainly the first known ones that take advantage of knitting’s unique properties: of being stretchy, of being manufacturable in arbitrary shapes. The earliest knitting is… weirder.

SCA websites

Quick sidenote – I got into knitting because, in grad school, I decided that in the interests of well-roundedness and my ocular health, I needed hobbies that didn’t involve reading research papers. (You can see how far I got with that). So I did two things: I started playing the autoharp, and I learned how to knit. Then, I was interested in the overlap between nerds and handicrafts, so a friend in the Society for Creative Anachronism pitched me on it and took me to a coronation. I was hooked. The SCA covers “the medieval period”; usually, 1000 CE through 1600 CE.

I first got into the history of knitting because I was checking if knitting counted as a medieval period art form. I was surprised to find that the answer was “yes, but barely.” As I kept looking, a lot of the really good literature and analysis – especially experimental archaeology – came out of blogs of people who were into it as a hobby, or perhaps as a lifestyle that had turned into a job like historical reenactment. This included a lot of people in the SCA, who had gone into these depths before and just wrote down what they found and published it for someone else to find. It’s a really lovely knowledge tradition to find one’s self a part of.

Aren’t you forgetting sprang?

There’s an ancient technique that gets some of the benefits of knitting, which I didn’t get to in the last post. It’s called sprang. Mechanically, it’s kind of like braiding. Like weaving, sprang requires a loom (the size of the cloth it produces) and makes a flat sheet. Like knitting, however, it’s stretchy.

Sprang shows up in lots of places – the oldest is from 1400 BCE in Denmark, but it also appears in other places in Europe, plus (before colonization!): Egypt, the Middle East, central Asia, India, Peru, Wisconsin, and the North American Southwest. Here’s a video where re-enactor Sally Pointer makes a sprang hairnet with iron-age materials.

Despite being widespread, it was never a common way to make cloth – everyone was already weaving. The question of the hour is: Was it used to make socks?

Well, there were probably sprang leggings. Dagmar Drinkler has made historically-inspired sprang leggings, which demonstrate that sprang colorwork creates some of the intricate designs we see painted on Greek statues – like this 480 BCE Persian archer.

I haven’t found any attestations of historical sprang socks. The Sprang Lady has made some, but they’re either tube socks or have separately knitted soles.

Why weren’t there sprang socks? Why didn’t sprang, widespread as it is, take on the niche that knitting took?

I think there are two reasons. One, remember that a sock is a shaped garment – tube-like, usually with a bend at the heel – and that like weaving, sprang makes a flat sheet. If you want another shape, you have to sew it in. It’s going to lose some stretch at the seam. It’s just more steps and skills than knitting a sock.

The second reason is warmth. I’ve never done sprang myself – from what I can tell, it has more of a net-like openness upon manufacture, unlike knitting which comes with some depth to it. Even weaving can easily be made pretty dense simply by putting the threads close together. I think, overall, a sprang fabric garment made with primitive materials is going to be less warm than a knit garment made with primitive materials.

Those are my guesses. I bring it up merely to note that there was another thread → cloth technique that made stretchy things that didn’t catch on the same way knitting did. If you’re interested in sprang, I cannot recommend The Sprang Lady’s work highly enough.

Anyway, let’s get back to knitting.

Knitting looms

The whole thing about Roman dodecahedrons being (hypothetically) used to knit glove fingers, described in the last post? I don’t think that was actually the intended purpose, for the reasons I described re: knitting wasn’t invented yet. But I will cop to the best argument in its favor, which is that you can, in fact, knit glove fingers with a Roman dodecahedron.

“But how?” say those of you not deeply familiar with various fiber arts. “That’s not needles,” you say.

You got me there. This is a variant of a knitting loom. A knitting loom is a hoop with pegs, used to make knit tubes. It can be the basis of a knitting machine, but you can also knit on one on its own. They make more consistent knit tubes with less required hand-eye coordination. (You can also make flat panels with them, especially on a version called a knitting rake, but since all of the early knitting we’re talking about is tubes anyhow, let’s ignore that for the time being.)

Knitting on a modern knitting loom. || Photo from Cynthia M. Parker on flickr, under a CC BY-SA 2.0 license.

Knitting on a loom is also called spool knitting (because you can use a spool with nails in it as the loom for knitting a cord) and tomboy knitting (…okay). Structurally, I think this is also basically the same thing as lucet cord-making, so let’s go ahead and throw that in with this family of techniques. (The earliest lucets are from ~1000 CE Viking Sweden and perhaps medieval Viking Britain.)

The important thing to note is that loom knitting makes a result that is, structurally, knit. It’s difficult to tell whether a given piece is knit with a loom or needles, if you didn’t see it being made. But since it’s a different technique, different aspects become easier or harder.

A knitting loom sounds complicated but isn’t hard to make, is the thing. Once you have nails, you can make one easily by putting them in a wood ring. You could probably carve one from wood with primitive tools. Or forge one. So we have the question: Did knitting needles or knitting looms come first?

We actually have no idea. There aren’t objects that are really clearly knitting needles OR knitting looms until long after the earliest pieces of knitting. This strikes me as a little odd, since wood and especially metal should preserve better than fabric, but it’s what we’ve got. It’s probably not helped by the fact that knitting needles are basically just smooth straight sticks, and it’s hard to say that any smooth straight stick is conclusively a knitting needle (unless you find it with half a sock still on it.)

(At least one author, Isela Phelps, speculates that finger-knitting, which uses the fingers of one hand like a knitting loom and makes a chunky knit ribbon, came first – presumably because, well, it’s easier to start from no tools than to start from a specialized tool. This is possible, although the earliest knit objects are too fine and have too many stitches to have been finger-knit. The creators must have used tools.)

(stringbed also points out that a piece of whale baleen can be used as a circular knitting needle, and that the relevant cultures did have access to and trade in whale parts. While we have no particular evidence that baleen was used this way, it does mean that humanity wouldn’t have had to invent plastic before inventing the circular knitting needle – we could have had those since prehistory. So, I don’t know, maybe it was whales.)

THE first knitting

The earliest knit objects we have… ugh. It’s not the Egyptian socks. It’s this.

Photo of an old, long, thin knit tube in lots of striped colors.
One of the oldest knit objects. || Photo from Musée du Louvre, AF 6027.

There’s a pair of long, thin, colorful knit tubes, about an inch wide and a few feet long. They’re pretty similar to each other. Due to the problems inherent in time passing and the flow of knowledge, we know one of them is probably from Egypt, and was carbon-dated to 425-594 CE. The other quite similar tube, of a similar age, has not been carbon dated but is definitely from Egypt. (The original source text for this second artifact is in German, so I didn’t bother trying to find it, and instead refer to stringbed’s analysis. See also matthewpius guestblogging on Loopholes.) So between the two of them, we have a strong guess that these knit tubes were manufactured in Egypt around 425-594 CE, about 500 years before socks.

People think it was used as a belt.

This is wild to me. Knitting is stretchy, and I did make fun of those peasants in 1300 CE for not having elastic waistlines, so I could see a knitted belt being more comfortable than other kinds of belts.2 But not a lot better. A narrow knit belt isn’t going to distribute force onto the body much differently than a regular non-stretchy belt, and regular non-stretchy belts were already in great supply – woven, rope, leather, etc. Someone invented a whole new means of cloth manufacture and used it to make a slightly different version of a thing that already existed.

Then, as far as I can tell, there are no knit objects in the known historical record for five hundred years until the Egyptian socks pop up.

Pulling objects out of the past is hard. Especially things made from cloth or animal fibers, which rot (as compared to metal, pottery, rocks, bones, which last so long that in the absence of other evidence, we name ancient cultures based on them.) But every now and then, we can. We’ve found older bodies and textiles preserved in ice and bogs and swamps.3 We have evidence of weaving looms and sewing needles and pictures of people spinning or weaving cloth and descriptions of them doing it, from before and after. I’m guessing that the technology just took a very long time to diversify beyond belts.

Speaking of which: how was the belt made? As mentioned, we don’t find anything until much later that is conclusively a knitting needle or a knitting loom. The belts are also, according to matthewpius on Loopholes, made with a structure called double knitting. The effect is (as indicated by Pallia – another historic reenactor blog!) kind of hard to achieve with knitting needles in the way they did it, but pretty simple to do with a knitting loom.

(Another Egyptian knit tube belt from an unclear number of centuries later.)

Viking knitting

You think this is bad? Remember before how I said knitting was a way of manufacturing cloth, but that it was also definable as a specific structure of a thread, that could be made with different methods?

The oldest knit object in Europe might be a cup.

Photo of a richly decorated old silver cup.
The Ardagh Chalice. || Photo by Sailko under a CC BY-SA 3.0 license.

You gotta flip it over.

Another photo of the ornate chalice from the equally ornate bottom. Red arrows point to some intricate wire decorations around the rim.
Underside of the Ardagh Chalice. || Adapted from a Metropolitan Museum image.

Enhance.

Black and white zoom in on the wire decorations. It's more  clearly a knit structure.
Photo from Robert M. Organ’s 1963 article “Examination of the Ardagh Chalice-A Case History”, where they let some people take the cup apart and put it back together after.

That’s right, this decoration on the bottom of the Ardagh Chalice is knit from wire.
Another example is the decoration on the side of the Derrynaflen Paten, a plate made in 700 or 800 CE in Ireland. All the examples seem to be from churches, hidden by or from Vikings. Over the next few hundred years, there are some other objects in this technique. They’re tubes knitted from silver wire. “Wait, can you knit with wire?” Yes. Stringbed points out that knitting wire with needles or a knitting loom would be tough on the valuable silver wire – they could break or distort it.

Photo of an ornate silver plate with gold decorations. There are silver knit wire tubes around the edge.
The Derrynaflan Paten, zoomed in on the knit decorations around the edge. || Adapted from this photo by Johnbod, under a CC BY-SA 3.0 license.

What would make sense to do it with is a little hook, like a crochet hook. But that would only work on wire – yarn doesn’t have the structural integrity to be knit with just a hook, you need to support each of the active loops.

So was the knit structure just invented separately by Viking silversmiths, before it spread to anyone else? I think it might have been. It’s just such a long time before we see knit cloth in Europe, and we have this other plausible story for how knit cloth eventually got there.

(I wondered if there was a connection between the Viking knitting and their sources of silver. Vikings did get their silver from the Islamic world, but as far as I can tell, mostly from Iran, which is pretty far from Egypt and doesn’t have an ancient knitting history – so I can’t find any connection there.)

The Egyptian socks

Let’s go back to those first knit garments (that aren’t belts), the Egyptian knit blue-and-white socks. There are maybe a few dozen of these, now found in museums around the world. They seem to have been pulled out of Egypt (people think Fustat) by various European/American collectors. People think that they were made around 1000-1300 CE. The socks are quite similar: knit, made of cotton, in white and 1-3 shades of indigo, with geometric designs sometimes including Kufic characters.

I can’t find a specific origin location (other than “probably Egypt, maybe Fustat?”) for any of them. The possible first sock mentioned in the last post is one of these – I don’t know if there are any particular reasons for thinking that sock is older than the others.

This one doesn’t seem to be knit OR naalbound. Anne Marie Decker at Nalbound.com thinks it’s crocheted, and that the date is just completely wrong. To me, at least, this casts doubt on all the other dates of similar-looking socks.

That anomalous sock scared me. What if none of them had been carbon-dated? Oh my god, they’re probably all scams and knitting was invented in 1400 and I’m wrong about everything. But I was told in a historical knitting facebook group that at least one had been dated. I found the article, and a friend from a minecraft discord helped me out with an interlibrary loan. I was able to locate the publication where Antoine de Moor, Chris Verhecken-Lammens and Mark Van Strydonck did in fact carbon-date four ancient blue-and-white knit cotton socks and found that they dated back to approximately 1100 CE – with a 95% chance that they were made somewhere between 1062 and 1149 CE. Success!

Helpful research tip: for the few times when the SCA websites fail you, try your facebook groups and your minecraft discords.

Estonian mitten

Photo of a tattered old fragment of knitting. There are some colored designs on it in blue and red.
Yeah, this is all of it. Archeology is HARD. [Image from Anneke Lyffland’s writeup.]

Also, here’s a knit fragment of a mitten found in Estonia. (I don’t have the expertise or the mitten to determine it myself, but Anneke Lyffland (another SCA name), a scholar who studied it, is aware of cross-knit-looped naalbinding – like the Peruvian knit-lookalikes mentioned in the last post – and doesn’t believe this was naalbound.) It was part of a burial dated to 1238-1299 CE. This is fascinating, and it does suggest a culture of knitted practical objects in Eastern Europe in this time period. This is the earliest East European non-sock knit fabric garment that I’m aware of.

But as far as I know, this is just the one mitten. I don’t know much about archaeology in the area and era, and can’t speculate as to whether this is evidence that knitting was rare or whether we have very few wool textiles from the area and it’s not that surprising. (The voice of shoulder-Thomas-Bayes says: Lots of things are evidence! Okay, I can’t speculate as to whether it’s strong evidence, are you happy, Reverend Bayes?) Then again, a bunch of speculation in this post is also based on two maybe-belts, so, oh well. Take this with salt.

By the way, remember when I said crochet was super-duper modern, like invented in the 1800s?

Literally a few days ago, who but the dream team of Cary “stringbed” Karp and Anne Marie Decker published an article in Archaeological Textiles Review identifying several ancient probably-Egyptian socks thought to be naalbound as being actually crocheted.

This comes down to the thing about fabric structures versus techniques. There’s a structure called slip stitch that can be either crocheted or naalbound. Since we know naalbinding is that old, if you’re looking at an old garment and see slip stitch, maybe you say it was naalbound. But basically no fabric garment is just one continuous structure all the way through. How do the edges work? How did it start and stop? Are there any pieces worked differently, like the turning of a heel or a cuff or a border? Those parts might be more clearly worked with a crochet hook than a naalbinding needle. And indeed, that’s what Karp and Decker found. This might mean that those pieces are forgeries – there’s been no carbon dating. But it might mean crochet is much, much older than previously thought.

My hypothesis

Knitting was invented sometime around or perhaps before 600 CE in Egypt.

From Egypt, it spread to other Muslim regions.

It spread into Europe via one or more of these:

  1. Ordinary cultural diffusion northwards
  2. Islamic influence in the Iberian Peninsula
    • In 711 CE, the Umayyad Caliphate conquered most of the Iberian Peninsula (Al-Andalus)…
      • Kicking off a lot of Islamic presence in and control over the area up until 1400 CE or so…
  3. Meanwhile, starting in 1095 CE, the Latin Church called for armies to take Jerusalem from Muslim control, kicking off the Crusades.
    • …Peppering Arabic influences into Europe, particularly France, over the next couple centuries.

… Also, the Vikings were there. They separately invented the knitting structure in wire, but never got around to trying it out in cloth, perhaps because the required technique was different.

Another possibility

Wrynne, AKA Baroness Rhiall of Wystandesdon (what did I say about SCA websites?), a woman who knows a thing or two about socks, believes that, based on these plus the design of other historical knit socks, the route goes something like:

??? points to Iran, which points to: A. Eastern Europe, then to 1. Norway and Sweden and 2. Russia; and B. ???, then to Spain, then to Western Europe.

I don’t know enough about socks to have a sophisticated opinion on her evidence, but the reasoning seems solid to me. For instance, as she explains, old Western European socks are knit from the cuff of the sock down, whereas old Middle Eastern and East European socks are knit from the toe of the sock up – which is also how Eastern and Northern European naalbound socks were shaped. Baroness Rhiall thinks Western Europe invented its sockmaking techniques independently, based on only a little experience with a few late-1200s/1300s knit pieces from Moorish artisans.

What about tools?

Here’s my best guess: The Egyptian tubes were made on knitting looms.

The Viking tubes were invented separately, made with a metal hook as stringbed speculates, and never had any particular connection to knitting yarn.

At some point, in the Middle East, someone figured out knitting needles. The Egyptian socks and Estonian mitten and most other things were knit in the round on double-ended needles.

I don’t like this as an explanation, mostly because of how it posits 3 separate tools involved in the earliest knit structures – that seems overly complicated. But it’s what I’ve got.

Knitting in the tracks of naalbinding

I don’t know if this is anything, but here are some places we also find lots of naalbinding, beginning from well before the medieval period: Egypt. Oman. The UAE. Syria. Israel. Denmark. Norway. Sweden. Sort of the same path that we predict knitting traveled in.

I don’t know what I’m looking at here.

  • Maybe this isn’t real, and these places just happen to preserve textiles better
  • Longstanding trade or migration routes between North Africa, the Middle East, and Eastern Europe?
  • Culture of innovation in fiber?
  • Maybe fiber is more abundant in these areas, and thus there was more affordance for experimenting. (See below.)

It might be a coincidence. But it’s an odd coincidence, if so.

Why did it take so long for someone to invent knitting?

This is the question I set out to answer in the initial post, but then it turned into a whole thing and I don’t think I ever actually answered my question. Very, very speculatively: I think knitting is just so complicated that it took thousands of years, and an environment rich in fiber innovation, for someone to invent and make use of the series of steps that is knitting.

Take this next argument with a saltshaker, but: my intuitions back this up. I have a good visual imagination. I can sort of “get” how a slip knot works. I get sewing. I understand weaving, I can boil it down in my mind to its constituents.

There are birds that do a form of sewing and a form of weaving. I don’t want to imply that if an animal can figure it out, it’s clearly obvious – I imagine I’d have a lot of trouble walking if I were thrown into the body of a centipede, and chimpanzees can drastically outperform humans on certain cognitive tasks – but I think, again, it’s evidence that it’s a simpler task in some sense.

Same with sprang. It’s not a process I’m familiar with, but watching Sally Pointer do it on a very primitive loom, I can understand it and could probably do it now. Naalbinding – well, it’s knots, and given a needle and knowing how to make a knot, I think it’s pretty straightforward to tie a bunch of knots on top of each other to make fabric out of it.

But I’ve been knitting for quite a while now and have finished many projects, and I still can’t say I totally get how knitting works. I know there’s a series of interconnected loops, but how exactly they don’t fall apart? How the starting string turns into the final project? It’s not in my head. I only know the steps.

I think that if you erased my memory and handed me some simple tools, especially a loom, I could figure out how to make cloth by weaving. I think there’s also a good chance I could figure out sprang, and naalbinding. But I think that if you handed me knitting needles and string – even if you told me I was trying to get fabric made from a bunch of loops that are looped into each other – I’m not sure I would get to knitting.

(I do feel like I might have a shot at figuring out crochet, though, which is supposedly younger than any of these anyway, so maybe this whole line of thinking means nothing.)

Idle hands as the mother of invention?

Why do we innovate? Is necessity the mother of invention?

This whole story suggests not – or at least, that’s not the whole story. We have the first knit structures in belts (already existed in other forms) and decorative silver wire (strictly ornamental.) We have knit socks from Egypt, not a place known for demanding warm foot protection. What gives?

Elizabeth Wayland Barber says this isn’t just knitting – she points to the spinning jenny and the power loom, both innovations in textile production, that were invented recently by men despite thousands of previous years of women producing yarn and cloth. In Women’s Work: The First 20,000 Years, she writes:

“Women of all but the top social and economic classes were so busy just trying to get through what had to be done each day that they didn’t have excess time or materials to experiment with new ways of doing things.”

This suggests a somewhat different mechanism of invention – sure, you need a reason to come up with (or at least follow up on) a discovery, but you also need the space to play. 90% of everything is crap; you need to know you can throw away (or unravel, or afford the time to re-make) 900 crappy garments before you hit upon the sock.

Bill Bryson, in the introduction to his book At Home, writes about the phenomenon of clergy in the UK in the 1700s and 1800s. To become an ordained minister, one needed a university degree, but not in any particular subject, and little ecclesiastical training. Duties were light; most ministers read a sermon out of a prepared book once a week and that was about it. They were paid in tithes from local landowners. Bryson writes:

“Though no one intended it, the effect was to create a class of well-educated, wealthy people who had immense amounts of time on their hands. In consequence many of them began, quite spontaneously, to do remarkable things. Never in history have a group of people engaged in a broader range of creditable activities for which they were not in any sense actually employed.”

He describes some of the great amount of intellectual work that came out of this class, including not only the aforementioned power loom, but also: scientific descriptions of dinosaurs, the first Icelandic dictionary, Jack Russell terriers, submarines, aerial photography, the study of archaeology, Malthusian traps, the telescope that discovered Uranus, werewolf novels, and – courtesy of the original Thomas Bayes – Bayes’ theorem.

I offhandedly posited a random per-person effect in the previous post – each individual has a chance of inventing knitting, so eventually someone will figure it out. There’s no way this can be the whole story. A person in a culture that doesn’t make clothes mostly out of thread, like the traditional Inuit (thread is used to sew clothes, but the clothes are very often sewn out of animal skin rather than woven fabric) seems really unlikely to invent knitting. They wouldn’t have lots of thread about to mess around with. So you need the people to have a degree of familiarity with the materials. You need some spare resources. Some kind of cultural lenience for doing something nonstandard.

…But is that the whole story? The Incan Empire was enormous, with 12,000,000 citizens at its height. They didn’t have a written language. They had the quipu system for recording numbers with knotted string, but they didn’t have a written language. (Their neighbors, the Mayans, did.) Easter Island, between its colonization by humans in 1000 CE and its worse colonization by Europeans in 1700 CE, had a maximum population of maybe 12,000. It’s one of the most remote islands in the world. In isolation from other societies, they did develop a written language, in fact Polynesia’s only native written language.

Color photo of a worn wooden tablet engraved with intricate Rongorongo characters.
One of ~26 surviving pieces of Rongorongo, the undeciphered written script of Easter Island. This is Text R, the “Small Washington tablet”. Photo from the Smithsonian Institution. (Image rotated to correspond with the correct reading order, as a courtesy to any Rongorongo readers in my audience. Also, if there are any Rongorongo readers in my audience, please reach out. How are you doing that?!)
A black and white photo of the same tablet. The lines of characters are labelled (e.g. Line 1, Line 2) and the  symbols are easier to see. Some look like stylized humans, animals, and plants.
The same tablet with the symbols slightly clearer. Image found on kohaumoto.org, a very cool Rongorongo resource.

I don’t know what to do with that.

Still. My rough model is:

A businessy chart labelled "Will a specific group make a specific innovation?" There are three groups of factors feeding into each other. First is Person Factors, with a picture of a person in a power wheelchair: Consists of [number of people] times [degree of familiarity with art]. Spare resources (material, time). And cultural support for innovation. Second is Discovery Factors, with a picture of a microscope: Consists of how hard the idea "is to have", benefits from discovery, and [technology required] - [existing technology]. ("Existing technology" in blue because that's technically a person factor.) Third is Special Sauce, with a picture of a wizard. Consists of: Survivorship Bias and The Easter Island Factor (???)

The concept of this chart amused me way too much not to put it in here. Sorry.

(“Survivorship bias” meaning: I think it’s safe to say that if your culture never developed (or lost) the art of sewing, the culture might well have died off. Manipulating thread and cloth is just so useful! Same with hunting, or fishing for a small island culture, etc.)

…What do you mean Loopholes has articles about the history of the autoharp?! My Renaissance man aspirations! Help!


Delightful: A collection of 1900s forgeries of the Paracas textile. They’re crocheted rather than naalbound.

1 (Uh, usually. You can finger weave with just a stick or two to anchor some yarn to but it wasn’t widespread, possibly because it’s hard to make the cloth very wide.)

2 I had this whole thing ready to go about how a knit belt was ridiculous because a knit tube isn’t actually very stretchy “vertically” (or “warpwise”), and most of its stretch is “horizontal” (or “weftwise”). But then I grabbed a knit tube (a fingerless glove) in my environment and measured it at rest and stretched, and it stretched about as far both ways. So I’m forced to consider that a knit belt might be a reasonable thing to make for its stretchiness. Empiricism: try it yourself!

3 Fun fact: Plant-based fibers (cotton, linen, etc) are mostly made of carbohydrates. Animal-based fibers (silk, wool, alpaca, etc) and leather are mostly made of protein. Fens are wetlands that are alkaline; bogs are acidic. Carbohydrates decay in acidic bogs but are well-preserved in alkaline fens. Proteins dissolve in alkaline fens but last in acidic bogs. So it’s easier to find preserved animal material or fibers in bogs and preserved plant material or fibers in fens.


Cross-posted to LessWrong.

Fiber arts, mysterious dodecahedrons, and waiting on “Eureka!”

Part 1: The anomaly

This story starts, as many stories do, with my girlfriend 3D-printing me a supernatural artifact. Specifically, one of my favorite SCPs, SCP-184.

This attempt got about 75% of the way through. Close enough.

We had some problems with the print. Did the problems have anything to do with printing a model of a mysterious artifact that makes spaces bigger on the inside, via a small precisely-calibrated box? I would say no, there’s no way that could be related.

Anyway, the image used for the SCP in question, and thus also the final printed model, is based on a Roman dodecahedron. Roman dodecahedrons are a particular shape of metal object that has been dug up at sites from all over the Roman world, and we have no idea why they exist.

Roman dodecahedra. || Image source unknown.

Many theories have been advanced. You might have seen these in an image that was going around the internet, which ended by suggesting that the object would work perfectly for knitting the fingers of gloves.

There isn’t a clear alternative explanation for what these are. A tool for measuring coins? A ruler for calculating distances? A sort of Roman fidget spinner? This author thinks it displays a date and has a neat explanation as to why. (Experimental archaeology is so cool, y’all.)

Whatever the purpose of the Roman dodecahedron was, I’m pretty sure it’s not (as the meme implies is obvious) for knitting glove fingers.1

Why?

1: The holes are always all different sizes, and you don’t need that to make glove fingers.

2: You could just do this with a donut with pegs in it, you don’t need a precisely welded dodecahedron. It does work for knitting glove fingers, you just don’t need something this complicated.

3: The Romans hadn’t invented knitting.

Part 2: The Ancient Romans couldn’t knit

Wait, what? Yeah, the Romans couldn’t knit. The Ancient Greeks couldn’t knit, the Ancient Egyptians couldn’t knit. Knitting took a while to take off outside of the Middle East and the West, but still, almost all of the Imperial Chinese dynasties wouldn’t have known how. Knitting is a pretty recent invention, time-wise. The earliest knit objects we have are from Egypt around 1000 CE.

Possibly the oldest knit sock known, ca 1000-1200 CE according to this page. || Photo is public domain from the George Washington University Textile Museum Collection.

This is especially surprising because knitting is useful for two big reasons:

First, it’s very easy to do. It takes yarn and two sticks and children can learn how. This is pretty rare for fabric manufacturing – compare, for instance, weaving, which takes an entire loom.

Sidenote: Do you know your fabrics? This next section will make way more sense if you do.

Woven fabric
  • Commonly found in: trousers, collared/button-up shirts, bedsheets, dish towels, woven boxers, quilts, coats, etc.
  • Not stretchy.
  • Loose threads won’t make the whole cloth unravel.

Knit fabric
  • Commonly found in: T-shirts, polo shirts, leggings, underwear, anything made of jersey fabric, sweaters, sweatpants, socks.
  • Stretchy.
  • If you pull on a loose thread, the cloth unravels.

Second, and oft-underappreciated, knitted fabric is stretchy. We’re spoiled by the riches of elastic fabric today, but it wasn’t always so. Modern elastic fabric uses synthetic materials like spandex or neoprene; the older version was natural latex rubber, and it seems to have taken until the 1800s to use rubber to make clothing stretchy. Knit fabric stretches without any of those.

Before knitting, your options were limited – you could only make clothing that didn’t stretch, which I think explains a lot of why medieval and earlier clothing “looks that way”. A lot of belts and drapey fabric. If something is form-fitting, it’s probably laced. (…Or just more-closely tailored, which unrelatedly became more of a thing later in the medieval period.)

You think these men had access to comfortable elastic waistlines? No they did not. || Image from the Luttrell Psalter, ~1330.

You could also use woven fabric on the bias, which stretches a little.

Woven fabric is stretchier this way. Grab something made of woven fabric and try it out. || Image by PKM, under a CC BY-SA 3.0 license.

Medieval Europe made stockings from fabric cut like this. Imagine a sock made out of tablecloth or button-down-shirt-type material. Not very flexible. Here’s a modern recreation on Etsy.

Other kinds of old “socks” were more flexible but more obnoxious, made of a long strip of bias-cut fabric that you’d wrap around your feet. (Known as: winingas, vindingr, legwraps, wickelbänder , or puttees.) Historical reenactors wear these sometimes. I’m told they’re not flexible and restrict movement, and that they take practice to put on correctly.

Come 1000 CE, knitting arrives on the scene.

Which is to say, it’s no surprise that the first knitted garments we see are socks! They get big in Europe over the next 300 years or so. Richly detailed bags and cushions also appear. We start seeing artistic depictions of knitting for the first time around now too.

Italian Madonna knitting with four needles, ~1350. Section of this miniature by Tommaso de Modena.

Interestingly, this early knitting was largely circular, meaning that you produce a tube of cloth rather than a flat sheet. This meant that the first knitting was done not with two sticks and some yarn, but four sticks and some yarn. This is much easier for making socks and the like than using two needles would be. …But also means that the invention process actually started with four needles and some yarn, so maybe it’s not surprising it took so long.2

(Why did it take so long to invent knitting flat cloth with two sticks? Well, there’s less of a point to it, since you already have lots of woven cloth, and you can make a lot of clothes – socks, sweaters, hats, bags – by knitting tubes. Also, by knitting circularly, you only have to know how to do one stitch (the knit stitch), whereas flat knitting requires you to also use a different stitch (the purl stitch) to make a smooth fabric that looks like, and is as stretchy as, round knitting. If you’re not a knitter, just trust me – it’s an extra step.)

(You might also be wondering: What about crochet? Crochet was even more recent. 1800s.)

Part 3: The Ancient Peruvians couldn’t knit either, but they did something that looks the same

You sometimes see people say that knitting is much older, maybe thousands of years old. It’s hard to tell how old knitting is – fabric doesn’t always preserve well – but it’s safe to say that it’s not that old. We have examples of people doing things with string for thousands of years, but no examples of knitting before those 1000 CE socks. What we do have examples of is naalbinding, a method of making fabric from yarn using a needle. Naalbinding produces a less-stretchy fabric than knitting. It’s found from Scandinavia to the Middle East and also shows up in Peru.

The native Peruvian form of naalbinding is a specific technique called cross-knit looping. (This technique also shows up sometimes in pre-American Eurasia, but it’s not common.) The interesting thing about cross-knit looping is that the fabric looks almost identical to regular knitting.

Here’s a tiny cross-knit-looped bag I made, next to a tiny regularly knit bag I made. You can see they look really similar. The fabric isn’t truly identical if you look closely (although it’s close enough to have fooled historians). It doesn’t act the same either – naalbound fabric is less stretchy than knit fabric, and it doesn’t unravel.

The ancient Peruvians cross-knit-looped decorations for other garments and the occasional hat, not socks.

Cross-knit-looped detail from the absolutely stunning Paracas Textile. If you look closely, it looks like stockinette knit fabric, but it’s not.

Inspired by the Paracas Textile figures above, I used cross-knit-looping to make this little fox lady fingerpuppet:

I think it was easier to do fine details than it would be if I were knitting – it felt more like embroidery – but it might have been slower to make the plain fabric parts than knitting would have been. But I’ve done a lot of knitting and very little cross-knit-looping, so it’s hard to compare directly. If you want to learn how to do cross-knit looping yourself, Donna Kallner on Youtube has handy instructional videos.

I wondered about naalbinding in general – does the practice predate human dispersal to the Americas, or did the Eurasian technique and the American technique evolve separately? Well, I don’t know for certain. Sewing needles and working with yarn are old, old practices, definitely pre-dating the hike across Beringia (~18,000 BCE). The oldest naalbinding is 6,500 years old, so it’s possible – but as far as I know, no ancient naalbinding has ever been found anywhere in the Americas outside of Peru, or in eastern Russia or Asia – it was mostly the Middle East and Europe, and then, also, separately, Peru. The process of cross-knit looping shares some similarities with net-making and basket-weaving, so it doesn’t seem so odd to me that the process was invented again in Peru.

For a while, I thought it was even weirder that the Peruvians didn’t get to knitting – they were so close, they made something that looks so similar. But cross-knit looping doesn’t actually share any more similarities with knitting than naalbinding does, or even than more common crafts like basketweaving or weaving do – the tools are different, the process is different, etc.

So the question should be the same for the Romans or any other culture with yarn and sticks before 1000 CE: why didn’t they invent knitting? They had all the pieces. …Didn’t they?

Yeah, I think they did.

Part 4: Many stones can form an arch, singly none

Let’s jump topics for a second. In Egypt, a millennium before there were knit socks, there was the Library of Alexandria. Zenodotus, the first known head librarian at the Library of Alexandria, organized lists of words, and probably the library’s books, in alphabetical order. He’s the first person we know of to alphabetize books this way, somewhere around 300 BCE.

Then, it takes 500 years before we see alphabetization of books by the second letter.3

The first time I heard this, I thought: Holy mackerel. That’s a long time. I know people who are very smart, but I’m not sure I know anyone smart enough to invent categorizing things by the second letter.

But. Is that true? Let’s do some Fermi estimates. The world population was 1.66E8 (166 million) in 500 BCE and 2.02E8 (202 million) in 200 CE. But only a tiny fraction would have had access to books, and only a fraction of those were using an alphabetic writing system in the first place. (And of course, people outside of the Library of Alexandria with access to books could have done it and we just wouldn’t know, because that fact would have been lost – but people have actually studied the history of alphabetization and do seem to treat this as the start of alphabetization as a cultural practice, so I’ll carry on.)

For this rough estimate, I’ll average the world population over that period to 2E8. Assuming a 50-year lifespan, that’s 10 lifespans, and thus roughly 2E9 people living in the window. If only one in a thousand of those people was in a place to have the idea and have it recognized (e.g. access to lots of books), that’s about 2E6 candidates – meaning the inventor only needed to be roughly 1 in 2 million. That’s suddenly not unreachable. Especially since I think “1 in 1,000 ‘being able to have the idea’” might be too high – and if it’s more like “1 in 10,000” or lower, the end number could be more like 1 in 200,000. I might actually know people who are 1 in 1 million smart – I have smart friends. So there’s some chance I know someone smart enough to have invented “organizing by the second letter of the alphabet”.
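To make the arithmetic explicit, here’s the same back-of-the-envelope estimate as a tiny Python sketch. The inputs are just the rough assumptions above – nothing more authoritative than that:

```python
# Fermi estimate: how rare did the inventor of second-letter alphabetization need to be?
# All inputs are the rough assumptions from the text, not real demographic data.

avg_population = 2e8      # rough average world population over the period
window_years = 500        # gap between first-letter and second-letter alphabetization
lifespan_years = 50       # assumed average lifespan

people_in_window = avg_population * (window_years / lifespan_years)  # ~2e9 people

for fraction_positioned in (1e-3, 1e-4):   # fraction with access to enough books
    candidates = people_in_window * fraction_positioned
    print(f"If 1 in {1/fraction_positioned:,.0f} people were positioned to have the idea, "
          f"the inventor only had to be ~1 in {candidates:,.0f}.")
```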

Sidenote: Ancient bacteria couldn’t knit

A parallel in biology: Some organisms emit alcohol as a waste product. For thousands of years, humans have been concentrating alcohol in one place to kill bacteria. (… Okay, not just to kill bacteria.) But it was only between 2005 and 2015 that some bacteria became roughly 10x more tolerant of alcohol.

Isn’t it strange that this has only happened in the last 10 years? This question actually led, via a winding path, to the idea that became my Funnel of Human Experience blog post. I forgot to answer the question there, but suffice it to say that if alcohol production is in some way correlated with the human population, the last 10 years are a bigger slice of total exposure than they sound like – but still not very much.

And yet, alcohol tolerance seems to have evolved in Enterococcus faecium only recently. The authors postulate the spread of alcohol-based hand sanitizers as the driver. Seems as plausible as anything. Or maybe it’s just difficult to evolve.

Knitting continues to interest me, because a lot of examples of innovation do rely heavily on what came before. To have invented organizing books by the second letter of the alphabet, you have to have invented organizing books by the first letter of the alphabet, and also know how to write, and have access to a lot of books for the second letter to even matter.

The sewing machine was invented in 1790 CE and improved drastically over the next 60 years, by which point it was widely used to automate a time-consuming and extremely common task. We could ask: “But why wasn’t the sewing machine invented earlier, like in 1500 CE?”

But we mostly don’t, because to invent a sewing machine, you also need very finely machined gears and other metal parts, and that technology only came along around the Industrial Revolution. You just couldn’t have made a reliable sewing machine in 1500 CE, even if you had the idea – you didn’t have all the steps. In software terms, as a technology, sewing machines have dependencies (see the toy sketch below). Thus, the march of human progress, yada yada yada.

But as far as I can tell, you had everything that went into knitting for thousands of years beforehand. You had sticks, you had yarn, you had the motivation. Knitting doesn’t have any dependencies beyond that. And you had brainpower: people everywhere in the past were turning fiber into yarn and yarn into clothing all of the time – seriously, making clothes from scratch takes so much time.
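To make the “dependencies” framing concrete, here’s a toy sketch in Python. The dependency lists are my own illustrative guesses, not a serious model of the history of technology:

```python
# Toy model of "technology dependencies": an invention is technically reachable
# once everything it depends on already exists. Dependency lists are illustrative only.

dependencies = {
    "knitting": {"sticks", "yarn"},
    "sewing machine": {"needle and thread", "finely machined metal parts"},
}

available_in_1500_ce = {"sticks", "yarn", "needle and thread"}
available_in_1850_ce = available_in_1500_ce | {"finely machined metal parts"}

def reachable(invention, available):
    """True if every prerequisite of the invention is already available."""
    return dependencies[invention] <= available

print(reachable("sewing machine", available_in_1500_ce))  # False – missing machined parts
print(reachable("sewing machine", available_in_1850_ce))  # True
print(reachable("knitting", available_in_1500_ce))        # True – and for millennia before that
```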

And yet, knitting is very recent. Apparently it was such a big leap that it took thousands of years for someone to make it.


UPDATE: see the follow-up to this post with more findings from the earliest days of knitting, crochet, sprang, etc.


1 I’m not displaying the meme itself in this otherwise image-happy post because if I do, one of my friends will read this essay and get to the meme but stop reading before they get to the part where I say the meme is incorrect. And then the next time we talk, they’ll tell me that they read my blog post and liked that part where a Youtuber proved that this mysterious Roman artifact was used to knit gloves, and hah, those silly historians! And then I will immediately get a headache.

2 Flexible circular knitting needles for knitting tubes are, as you might guess, also a more modern invention. If you’re in the Medieval period, it’s four sticks or bust.

3 My girlfriend and I made a valiant attempt to verify this, including squinting at some scans of fragments from Ancient Greek dictionaries written on papyrus from Papyri.info – which is, by the way, easily one of the most websites of all time. We didn’t make much headway.

The dictionaries and bibliographies we found on papyrus seem to be ordered fully alphabetically, but even those “source texts” were copies made around ~1500 CE, give or take, of much older (~200 CE) texts. So the texts we found might have been alphabetized by the copyists. Also, neither of us knows Ancient Greek, which did not help matters.

Ultimately, this citation about both primary and secondary alphabetization seems to come from Lloyd W. Daly’s well-regarded 1967 book Contributions to a History of Alphabetization in Antiquity and the Middle Ages, which I have not read. If you try digging further, good luck and let me know what you find.

[Crossposted to LessWrong.]

There’s no such thing as a tree (phylogenetically)

So you’ve heard about how fish aren’t a monophyletic group? You’ve heard about carcinization, the process by which ocean arthropods convergently evolve into crabs? You say you get it now? Sit down. Sit down. Shut up. Listen. You don’t know nothing yet.

“Trees” are not a coherent phylogenetic category. On the evolutionary tree of plants, trees are regularly interspersed with things that are absolutely, 100% not trees. This means that, for instance, either:

  • The common ancestor of a maple and a mulberry tree was not a tree.
  • The common ancestor of a stinging nettle and a strawberry plant was a tree.
  • And this is true for most trees or non-trees that you can think of.

I thought I had a pretty good guess at this, but the situation is far worse than I could have imagined.

Partial phylogenetic tree of various plants. TL;DR: Tan is definitely, 100% trees. Yellow is tree-like. Green is 100% not a tree. Sourced mostly from Wikipedia.

I learned after making this chart that tree ferns exist (h/t seebs), which I think just emphasizes my point further. Also, h/t kithpendragon on LW for suggestions on increasing accessibility of the graph.

Why do trees keep happening?

First, what is a tree? It’s a big, long-lived, self-supporting plant with leaves and wood.

Also of interest to us are the non-tree “woody plants”, like lianas (thick woody vines) and shrubs. They’re not trees, but at least to me, it’s relatively apparent how a tree could evolve into a shrub, or vice-versa. The confusing part is a tree evolving into a dandelion. (Or vice-versa.)

Wood, as you may have guessed by now, is also not a clean phyletic category. But it’s a reasonable category – a lignin-dense structure, one that usually grows from the exterior of the plant and forms a pretty readily identifiable material when separated from the tree. (…Okay, not the most explainable, but you know wood? You know when you hold something in your hand, and it’s made of wood, and you can tell that? Yeah, that thing.)

All plants have lignin and cellulose as structural elements – wood is plant matter that is dense with both of these.

Botanists don’t seem to think it could only have gone one way – for instance, the common ancestor of flowering plants is theorized to have been woody. But we also have pretty clear evidence of recent evolution of woodiness – say, a new plant arrives on a relatively barren island, and some of its descendants become treelike. Among plants native to the Canary Islands alone, woodiness evolved independently at least 38 times!

One relevant factor is that all woody plants do, in a sense, begin life as herbaceous plants – by and large, a tree sprout shares a lot of properties with any herbaceous plant. Indeed, botanists call this kind of fleshy, soft growth from the center that elongates a plant “primary growth”, and the later growth toward the outside, which thickens the plant, “secondary growth.” In a woody plant, secondary growth also means growing wood and bark – but other plants sometimes do secondary growth as well, like potatoes in their underground parts.

This paper addresses the question. I don’t understand a lot of the finer genetic details, but my impression of its thesis is this: analysis of convergently-evolved woody plants shows that the genes behind secondary woody growth are similar to the genes behind primary growth in plants that don’t do any secondary growth – even in unrelated plants. And woody growth is an adaptation of secondary growth. To abstract a little more, there is a common and useful structure in herbaceous plants that, when slightly tweaked, “dendronizes” them into woody plants.

Dendronization – Evolving into a tree-like morphology. (In the style of “carcinization”.) From ‘dendro’, the ancient Greek root for tree.

Can this be tested? Yep – researchers knocked out a couple of genes that control flower development and changed the light levels to mimic summer, and found that Arabidopsis rock cress – a distinctly herbaceous plant used as a model organism – grew a woody stem never otherwise seen in the species.

The tree-like woody stem (e) and morphology (f, left) of the gene-altered Arabidopsis, compared to its distinctly non-tree-like normal form (f, right). Images from Melzer, Siegbert, et al. “Flowering-time genes modulate meristem determinacy and growth form in Arabidopsis thaliana.” Nature Genetics 40.12 (2008): 1489-1492.

So not only can wood develop relatively easily in an herbaceous plant, it can come from messing with some of the genes that regulate annual behavior – an herby plant’s usual lifecycle of reproducing in warm weather and dying off in cool weather. That gets us two properties of trees at once: woodiness and being long-lived. It’s still a far cry from turning a plant into a tree, but also, it’s really not that far.

To look at it another way, as Andrew T. Groover put it:

“Obviously, in the search for which genes make a tree versus a herbaceous plant, it would be folly to look for genes present in poplar and absent in Arabidopsis. More likely, tree forms reflect differences in expression of a similar suite of genes to those found in herbaceous relatives.”

So: there are no unique “tree” genes. It’s just a different expression of genes that plants already use. Analogously, you can make a cake with flour, sugar, eggs, butter, and vanilla. You can also make frosting with sugar, butter, and vanilla – a subset of the ingredients you already have, in different ratios and put to a different use.

But again, the reverse also happens – a tree needs to do both primary and secondary growth, so it’s relatively easy for a tree lineage to drop the “secondary” growth stage and remain an herb for its whole lifespan, thus “poaizing.” As stated above, it’s hypothesized that the earliest angiosperms were woody, and some of their descendants would have lost that woodiness on the way to becoming the familiar herbaceous plants of today. There are also some plants like cassytha and mistletoe – herbaceous plants from tree-heavy lineages, both of which are parasites that grow on a host tree. Knowing absolutely nothing about the evolution of these lineages, I think it’s reasonable to speculate that they each came from a tree-like ancestor but poaized to become parasites. (Evolution is very fond of parasites.)

Poaization – Evolving into an herbaceous morphology. From ‘poai’, the ancient Greek term Theophrastus used for herbaceous plants (“Theophrastus on Herbals and Herbal Remedies”).

(I apologize to anyone I’ve ever complained to about jargon proliferation in rationalist-diaspora blog posts.)

The trend of staying in an earlier stage of development is also called neoteny. Axolotls are an example in animals – they resemble the juvenile stages of the closely-related tiger salamander. Did you know that, very rarely, or when exposed to hormone-affecting substances, axolotls “grow up” into something that looks a lot like a tiger salamander? Not unlike the gene-altered Arabidopsis.

A normal axolotl (left) vs. a spontaneously-metamorphosed “adult” axolotl (right.)

[Photo of normal axolotl by th1098 – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=30918973. Photo of metamorphosed axolotl from a deleted Reddit user, via this thread: https://www.reddit.com/r/Eyebleach/comments/etg7i6/this_is_itzi_he_is_a_morphed_axolotl_no_thats_not/ ]

Does this mean anything?

A friend asked why I was so interested in this finding about trees evolving convergently. To me, it’s that a tree is such a familiar, everyday thing. You know birds? Imagine if actually there were amphibian birds and mammal birds and insect birds flying all around, and they all looked pretty much the same – feathers, beaks, little claw feet, the lot. You had to be a real bird expert to be able to tell an insect bird from a mammal bird. Also, most people don’t know that there isn’t just one kind of “bird”. That’s what’s going on with trees.


I was also interested in culinary applications of this knowledge. You know people who get all excited about “don’t you know a tomato is a fruit?” or “a blueberry isn’t really a berry?” I was one once, it’s okay. Listen, forget all of that.

There is a kind of botanical definition of a fruit and a berry, based on which parts of the plant’s anatomy and reproductive structures the thing in question is derived from, but those definitions are definitely not related to the culinary or common understandings. (An apple, arguably the most central fruit of all to many people, is not truly a botanical fruit either.)

Let me be very clear here – this is not like the claims biologists usually make about ancestry. When we say a bird is a dinosaur, we mean that a bird and a T. rex share a common ancestor that had recognizably dinosaur-ish properties, and that we can generally point to some of those properties in the bird as well – feathers, bone structure, whatever. You can analogize this to similar statements you may have heard – “a whale is a mammal”, “a spider is not an insect”, “a hyena is a feline”…

But this is not what’s happening with fruit. Most “fruits” or “berries” are not descended from a common “fruit” or “berry” ancestor. Citrus fruits are all derived from a common fruit-bearing ancestor, and so are apples and pears, and plums and apricots – but an apple and an orange, or a fig and a peach, did not get their fruit from a shared fruit-bearing ancestor.

Instead of trying to get uppity about this, may I recommend the following:

  • Acknowledge that all of our categories are weird and a little arbitrary
  • Look wistfully at pictures of Welwitschia
  • Send a fruit basket to your local botanist/plant evolutionary biologist for putting up with this, or become one yourself
While natural selection is commonly thought to simply be an ongoing process with no “goals” or “end points”, most scientists believe that life peaked at Welwitschia.

[Photo by Sara&Joachim on Flickr, CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=6342924 ]

Some more interesting findings:

  • A mulberry (left) is not related to a blackberry (right). They just… both did that.
[ Mulberry photo by Cwambier – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=63402150. Blackberry photo by Ragesoss – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=4496657. ]
  • Avocado and cinnamon are from fairly closely-related tree species.
  • It’s possible that the last common ancestor between an apple and a peach was not even a tree.
  • Of special interest to my Pacific Northwest readers, the Seattle neighborhood of Magnolia is misnamed after the local madrona tree, which Europeans confused with the (similar-looking) magnolia. In reality, these two species are only very distantly related. (You can find them both on the chart to see exactly how far apart they are.)
  • None of [cactuses, aloe vera, jade plants, snake plants, and the succulent I grew up knowing as “hens and chicks”] are related to each other.
  • Rubus is the genus that contains raspberries, blackberries, dewberries, salmonberries… that kind of thing. (Remember, a genus is the category just above a species – which is kind of a made-up distinction, but suffice it to say, this is a closely-related group of plants.) Some of its members have 14 chromosomes. Some of its members have 98 chromosomes.
  • Seriously, I’m going to hand $20 in cash to the next plant taxonomy expert I meet in person. God knows bacteriologists and zoologists don’t have to deal with this.

And I have one more unanswered question. There doesn’t seem to be a strong trend of plants evolving into grasses, despite the fact that grasses are quite successful and seem like just about the most anatomically simple plant there could be – root, big leaf, little flower, you’re good to go. And yet most grass-like plants are in the same group. Why don’t more plants evolve towards the “grass” strategy?


Let’s get personal for a moment. One of my philosophical takeaways from this project is, of course, “convergent evolution is a hell of a drug.” A second is something like “taxonomy is not automatically a great guide to everyday categories.” Phylogenetics is absolutely fascinating, and I do wish people understood it better, and “there’s no such thing as a fish” is probably a good meme to have around, because most people do not realize that they’re genetically closer to a tuna than a tuna is to a shark – and “no such thing as a fish” invites that inquiry.

(You can, at least, say that a tree is a strategy. Wood is a strategy. Fruit is a strategy. A fish is also a strategy.)

At the same time, I have this vision in my mind of a clever person who takes this meandering essay of mine and goes around saying “did you know there’s no such thing as wood?” And they’d be kind of right.

But at the same time, insisting that “wood” is not a useful or comprehensible category would be the most fascinatingly obnoxious rhetorical move. Just the pinnacle of choosing the interestingly abstract over the practical whole. A perfect instance of missing the forest for – uh, the forest for …

… Forget it.


Related:

Timeless Slate Star Codex / Astral Codex Ten piece: The categories were made for man, not man for the categories.

Towards the end of writing this piece, I found that actual botanist Dan Ridley-Ellis made a tweet thread about this topic in 2019. See that for more like this from someone who knows what they’re talking about.

For more outraged plant content, I really enjoy both Botany Shitposts (tumblr) and Crime Pays But Botany Doesn’t (youtube.)

[Crossposted to LessWrong.]

Internet Harvest (2020, 3)

Internet Harvest is a selection of the most succulent links on the internet that I’ve recently plucked from its fruitful boughs. Feel free to discuss the links in the comments.


You know how you can “fake write” on a page, and produce a line of ink with a pen that looks kind of like words but isn’t really? There’s a name for that: Asemic writing.

Most images of butterflies you see represent dead butterflies – pinned to better show the wings, in a posture rarely seen in living butterflies. Once you notice the difference, you’ll see this everywhere. I originally found this article at least a year ago, and I’ve thought about it every time I’ve seen a picture of a butterfly since.

nabeelqu on understanding: This is a great article about what it takes to understand things. I really, highly endorse “ask dumb questions” as a step for understanding things.

The rate at which new genetic sequences are added to GenBank (an international database for genetics, relied on heavily by biologists) follows Moore’s Law. I have no idea what this implies. [Source]

Niche subreddit of the day: r/VisibleMending. For mending clothing and more with visible, often lovely repairs.

Take a look at the machine that synthesized the voice for numbers stations. (H/T Nova)

There are a lot of ways to learn about the wonders of aquatic ecosystems. You can watch documentaries, like the Blue Planet or Shape of Life series. You can watch videos from ocean exploration projects, like the EV Nautilus youtube channel. You can go scuba diving, or just go to a beach and whalewatch and collect shells. You can go tidepooling.

Or you can grab a bunch of sand and algae and seaweed and put it in a big jar, seal the lid, and leave it alone for a year, and see what kind of weird guys emerge from it.

Second niche subreddit of the week: r/FridgeDetective, where you post a picture of the inside of your fridge, and other people try and make deductions about you based on it.

Covid Dash is a project for tracking progress on treatments and countermeasures for COVID-19, and finding out where you can volunteer to help with clinical trials.

The vast majority of the ocean is completely lightless. Fish react by evolving to be extremely, extremely black.

I’ve been trying to use Twitter more. It turns out that the only good Twitters are “Can You Violate The Geneva Conventions [In Different Video Games]” and Internet of Shit. Paul Bae’s twitter, Malcolm Ocean’s twitter, Tom Inglesby’s twitter, Dril’s twitter, qntm’s twitter, and Rebecca R. Helm’s twitter are also quality. Elisabeth Bik’s twitter is good if you like hot gossip on academic mispractice. Everyone else’s twitter, including mine, is superfluous and probably doing more harm than good.

Finally, a request: I really like sidenotes or margin notes on websites, like Gwern’s. Does anyone know of a blogging or general website platform that currently allows these without being totally handbuilt? I’m not ready for fiddling with CSS yet.