Through the Looking Glass, and What Zheludev et al. (2024) Found There. By Georgia Ray. Every time microbiologists develop a new way of looking, they find that there's more to see than they expected.

Eukaryote writes for Asterisk Magazine

See my piece on the history of microbiology and the vast, invisible worlds that come into focus every time we figure out how to look closer:

Through the Looking Glass, and What Zheludev et al. (2024) Found There at Asterisk Magazine


I’ve written for Asterisk before: What I won’t eat, on arriving at an equilibrium on the “it’s bad when animals suffer” vs. “but animal products taste good” challenge.

Recommendation: reports on the search for missing hiker Bill Ewasko

Content warning: About an IRL death.

Today’s post isn’t so much an essay as a recommendation for two bodies of work on the same topic: Tom Mahood’s blog posts and Adam “KarmaFrog1” Marsland’s videos on the 2010 disappearance of Bill Ewasko, who went for a day hike in Joshua Tree National Park and dropped out of contact.

2010 – Bill Ewasko goes missing

2022 – Ewasko’s body found

And then if you’re really interested, there’s a little more info that Adam discusses from the coroner’s report:

(I won’t be fully recounting every aspect of the story. But I’ll give you the pitch and go into some aspects I found interesting. Literally everything interesting here is just recounting their work, go check em out.)

Most ways people die in the wilderness are tragic, accidental, and kind of similar. A person in a remote area gets injured or lost, becomes the other one too, and dies of exposure, a clumsy accident, etc. Most people who die in the wilderness have done something stupid to wind up there. Fewer people die who have NOT done anything glaringly stupid, but it still happens, in much the same way. Ewasko’s case appears to have been one of these. He was a fit 66-year-old who went for a day hike and never made it back. His story is not unprecedented.

This is also not a triumphant story. Bill Ewasko is dead. Most of these searches were made and reports written months and years after his disappearance. We now know he was alive when Search and Rescue started, but by months out, nobody involved expected to find him alive.

Ewasko was not found alive. In 2022, other hikers finally stumbled onto his remains in a remote area in Joshua Tree National Park; this was, largely, expected to happen eventually.

I recommend these particular stories, when we already know the ending, because they’re stunningly in-depth, well-written, fact-driven investigations from two smart technical experts trying to get to the bottom of a very difficult problem. Because of the way things shook out, we get to see the investigation and the changes in theories at multiple points: Tom Mahood spent years trying to locate Ewasko and wrote report after report, search after search, finding and receiving new evidence, changing his mind, as did Adam. And then we get the main missing piece: finding the body. Adam visits the site and tries to put the pieces together after that.

Mahood and Adam are trying to do something very difficult in a very level-headed fashion. It is tragic but also a case study in inquiry and approaching a question rationally.

(They’re not, like, Rationalist rationalists. One of Mahood’s logs makes note of visiting a couple of coordinates suggested by remote viewers, AKA psychics. But the human mind is vast and full of nuance, and so was the search area, and on literally every other count, I’d love to see you do better.)

Unknowns and the missing persons case

Like I said, nothing mind-boggling happened to Ewasko. But to be clear, by wilderness Search and Rescue standards, Ewasko’s case is interesting for a couple reasons:

First, Ewasko was not expected to be found very far away. He was a 66-year-old on a day hike. But despite an early and persistent search, the body was not found for over a decade.

Second, two days after he failed to make a home-safe call to his partner and was reported missing, a cell tower reported one ping from his cell phone. It wasn’t enough to triangulate his location, but the ping suggested that the phone was on, somewhere along a ring approximately 10.6 miles from a specific cell tower. The nearest point of that ring was, however, miles from the likely trail destinations reachable from Ewasko’s car – miles, and in the opposite direction, from where Ewasko ought to have been.

A detailed map of Joshua Tree National Park. Main points of interest are a few scattered areas all over the park that we know Ewasko was interested in visiting. In the middle of it is a parking lot, Juniper Flats, where Ewasko's car was found. About three miles to the northeast is Quail Mountain, another destination but one that's reachable from the trailhead where the car is - so maybe where he would have gone. But starting a couple miles northeast of THAT is the lower edge of a broad purple ring - this ring represents the cell tower ping from 2 days after last contact with Ewasko, suggesting that his phone was at a point along this arc.
The base for a decade of searching. Approximate overlays, info from Mahood and Adam’s work, over Joshua Tree National Park visitor map. 

If you’ve spent much time in wilderness areas in the US, you know that cell coverage is findable but spotty. You’ll often get reception on hills but not in valleys, or suchlike. There’s a margin for error on cell tower pings that depends on location. Also, in this case, Verizon (Ewasko’s carrier) had decent coverage in the area – so it’s kind of surprising, and possibly constrains his route, that his cell phone would have pinged only once.

All of this is very Bayesian: Ewasko’s cellphone was probably turned off for parts of his movement to save battery (especially before he realized he was in danger), maybe there was data that the cell carrier missed, etc, etc. But maybe it suggests certain directions of travel over others. And of course, to have sent the one signal that did go out, he has to have gotten somewhere near that ring – again, probably.
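(If you want the geometry concrete, here is a toy sketch in Python of the kind of consistency check the ping implies. The tower coordinates, the margin, and the test point are all made up for illustration; this is not Mahood’s method or the real tower location.)

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    r = 3958.8  # mean Earth radius, miles
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Made-up numbers, for illustration only.
TOWER = (34.15, -116.35)   # (lat, lon) of the hypothetical cell tower
PING_RADIUS_MILES = 10.6   # distance implied by the ping
MARGIN_MILES = 1.5         # assumed slop for terrain and measurement error

def consistent_with_ping(lat, lon):
    """Is a candidate point within the ring (annulus) the ping allows?"""
    d = haversine_miles(TOWER[0], TOWER[1], lat, lon)
    return abs(d - PING_RADIUS_MILES) <= MARGIN_MILES

print(consistent_with_ping(34.02, -116.30))  # a hypothetical candidate spot
```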

How do you look for someone in the wilderness?

Search and rescue – especially if you are looking for something that is no longer actively trying to be found, like a corpse – is very, very arduous. In some ways, Joshua Tree National Park is a pretty convenient location to do search and rescue: there aren’t a lot of trees, the terrain is not insanely steep, you don’t have to deal with river or stream crossings, clues will not be swept away by rain or snow.

But it’s not that simple. The terrain in the area looks like this:

A desert landscape of rolling nested hills with shrubs small and large and a few spiky Joshua Trees dotted over it.
I haven’t been to Joshua Tree myself, but going from Adam’s videos, this is representative of the kind of terrain. || Photo in Joshua Tree National Park by Shane Burkhardt, under a CC BY-NC 2.0 license.

There are rocks, low obstacles, different kinds of terrain, hills and lines of sight, and enough shrubbery to hide a body.

A lot of the terrain looks very similar to other parts of the terrain. Also dotted about are washes made of long stretches of smooth sand, so the landscape is littered with features that look exactly like trails.

Also, environmentally, it’s hot and dry as hell, like “landscape will passively kill you”, and there are rattlesnakes and mountain lions.

When a search and rescue effort starts, searchers begin by outlining the area in which they think the person might plausibly be. Natural features like cliffs can constrain the possible routes, as can things like roads, on the grounds that if a lost person found a road, they’d wait by it.

You also consider how long it’s been and how much water they have. Bill Ewasko was thought to have three bottles of water on him – under harsh and dry circumstances, that water becomes a leash: you can only go so far with what you have. A person on foot in the desert is limited in both time and distance by the amount of water they carry; once that water runs out, their body will drop somewhere in the area those parameters circumscribe.
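(A toy version of that arithmetic, with numbers I made up – real search planners model heat, rest, and night travel much more carefully, but the shape of the constraint is just multiplication:)

```python
# Made-up illustrative numbers -- not Ewasko's actual supplies or pace.
liters_carried = 1.5          # e.g. a few small bottles
liters_per_hour_moving = 0.5  # rough consumption hiking in desert heat
miles_per_hour = 2.0          # rough pace over broken terrain

hours_of_water = liters_carried / liters_per_hour_moving
max_travel_miles = hours_of_water * miles_per_hour
print(f"~{hours_of_water:.0f} hours of water -> ~{max_travel_miles:.0f} miles of travel, tops")
```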

Starting from the closest, most likely places and moving out, searchers first hit up the trails and other clear points of interest. But once they leave the trail? Well, when they can, maybe they go out in an area-covering pattern, like this:

A topographical map overlaid with a GPS track. The GPS path evenly and methodically covers a small area.
Map by Tom Mahood of one of his search expeditions, posted here. The single-dashed line is the cellphone ping radius.

But in practice, that’s not always tenable. Maybe you can really plainly see from one part to another and visually verify there’s nothing there. Maybe this wouldn’t get you enough coverage, if there are obstacles in the way. There are mountains and cliff faces and rocky slopes to contend with. 

Also, it’s pretty hard to cover “all the trails”, since they connect to each other, and someone is really more likely to be near a trail than far away from a trail. Or you might have an idea about how they would have traveled – so do you do more covering-terrain searching, or do you check farther-out trails? In this process, searchers end up making a lot of judgment calls about what to prioritize, way more than you might expect.

You end up taking snaky routes like this:

Another topographical map overlaid with a GPS track. This one has a few overlaid with each other, but the active expedition is a snaking winding route around steep mountains, it is NOT visibly even and methodical.
Map by Tom Mahood, posted here. This is a zoom-in of a pretty small area. Blue was the ground covered in this single expedition, green and red are older search trails, and the long dashed line is the cellphone ping radius.

The initial, official Search and Rescue was called off after about a week, so the efforts Mahood records – most of which he did himself, or with some buddies – constitute basically every search that happened afterward. He posts GPS maps too, of each day’s travels overlaid on past travels. You see him work outward, covering hundreds of miles, filling in the blank spots on the map.

Mahood is really good at both being methodical and explaining his reasoning for each expedition he makes, and where he thinks to look. It’s an absolutely fascinating read.

43 expeditions in, in December 2012, Mahood writes this:

A screenshot of a comments and map. The map is a zoomed in area of a BUNCH of GPS trails over time, filling in space all over about 6 square miles on the map where Ewasko might be, much of it overlapping or close to the cellphone ping radius. Up in a hill near the north corner, and just off the edge of where the latest trail goes, there is a purple dot. The text reads: "Comments:
At one point in my travels I reached the northerly summit of the free standing hill northerly of Samuelson’s Rocks. Looking southwest I could see the rugged slopes of Quail Mountain. Looking due west, I could see right into the mouth of Smith Water Canyon. Toward the north, Quail Wash flowed down toward the homes just beyond the limits of Joshua Tree National Park. I was looking at the entire playing field. I sat for a while, scanned the area with binoculars and thought about things. Knowing where we had been, where the original searchers had been and what we now know the cell phone ping means, I started to develop some new ideas for the next phase in searching. And one way or another, I suspect it will be the final phase. We’ll either find Bill or he’s not findable."
In this image, one map square is ~one mile.

The purple dot is my addition. This is where Ewasko’s body was found in 2022. Mahood wrote this about the same trip where (as far as I can tell) he came the closest any searcher ever got to finding Ewasko. Despite saying it was the end game, Mahood and associates mounted about 50 more trips. Hindsight is heartbreaking.

Making hindsight useful

Hindsight haunts this story in 2024. It’s hard to learn about something like this and not ask “what could have stopped this from happening?”

I found myself thinking, sort of automatically, “no, Ewasko, turn around here, if you turn around here you can still salvage this,” like I was planning some kind of cross-temporal divine intervention. That line of thinking is, clearly, not especially useful.

Maybe the helpful version of this question, or one of them, is: If I were Ewasko, knowing what Ewasko knew, what kind of heuristics should I have used that would have changed the outcome?

The answer is obviously limited by the fact that we don’t know what Ewasko did. There are some specifics, like that he didn’t tell his contacts very specific hiking plans. But he was also planning on a day hike at an established trailhead in a national park an hour outside of Palm Springs. Once he was up the trail, you’ll have to watch Adam’s video and draw your own conclusions (if Adam is even right).

Mahood writes: “People seldom act randomly, they do what makes sense to them at the time at the specific location they are at.” 

And Adam says: “Most man-made disasters don’t spring from one bad decision but from a series of small, understandable mistakes that build on one another.”

Another question is: If I were the searchers, knowing what the searchers know, what could I have done differently that would have found the body faster?

Knowing how far away the body was found and the kind of terrain covered, I’m still out on this one.

How deep the search got

Moving parts include:

  • Concrete details about Ewasko (Ewasko’s level of fitness, his supplies, down to the particular maps he had, what his activities were earlier in the day)
  • Ewasko’s broader mindset (where he wanted to go at the outset, which tools he used to navigate trails, how much HE knew about the area)
  • Ewasko’s moment-to-moment experience (if he were at a particular location and wanted to hurry home, which route would he take? What if he were tired and low on water and recognized he was in an emergency? What plans might he make?) (This ties into the field of Search and Rescue psychology – people disoriented in the wilderness sometimes make predictable decisions.)
  • Physical terrain (which trails exist and where? How hard is it to get from place to place? What obstacles are there?)
  • Weather (how much moonlight was there? How hard was travelling by night? How bad was the daytime heat?)
  • Electromagnetic terrain (where in the park has cell service?)
  • Electromagnetic interpretation (How reliable is one reported cell phone ping? If it is inaccurate, in which ways might it be inaccurate?)
  • Other people’s reports (the very early search was delayed because a ranger apparently just repeatedly failed to notice Ewasko’s car at the trailhead, and there were conflicting reports about which way it was parked. According to Adam and I think Mahood, it now seems like the car was probably there the entire time it should have been, and it was probably just missed due to… regular human error. But if this is one of the few pieces of evidence you have, and it looks odd – of course it seems very significant.)
  • The search evolving over time (where has been looked, in what ways, before? And especially as the years pass on – some parts of the terrain are now extremely well-searched, not to mention regularly used by regular hikers. What are the chances one of these searches missed him somewhere, vs. that Ewasko is in a completely new part of the territory?)

I imagine that it would be really hard to choose to carry on with something like this. In this investigation, there was really no new concrete evidence between 2010 and 2022. As Mahood goes on, in each investigation, he adds the tracks to his map. Territory fills in – big swathes of trail with each trip. New models emerge, but by and large the only changing detail is just that you’ve checked some places now, and he’s somewhere you haven’t checked. Probably.

A hostile information environment

Another detail that just makes the work more impressive: Mahood did all these investigations mostly on his own, with (as he sees it, although it’s my phrasing) dismissal and at best limited help from Joshua Tree National Park officials. The reason Mahood posted all of this on the internet was, as he describes it, throwing up his hands and trying to crowd-source it, asking for ideas.

Then, after that: the internet has a lot of interested, helpful people. I first ran into Mahood’s blog years ago via r/RBI (“Reddit Bureau of Investigation”) or /r/UnsolvedMysteries or one of those. I love OSINT; I think Mahood doing what he did was very cool. But on those sites, and in other places, there are also a lot of out-there wackos. (I know, wackos on the internet. Imagine.) In fact there’s a whole conspiracy theory community called Missing 411 about unexplained disappearances in national parks, which attributes them vaguely to sinister and/or supernatural sources. I think that’s all probably full of shit, though I haven’t tried to analyze it.

Anyway, this case attracted a lot of attention among those types. Like: What if Bill Ewasko didn’t want to be found? What if someone wanted to kill him? What if the cellphone ping was left as an intentional red herring? You run into words like “staged” or “enforced disappearance” or “something spooky” in this line of thought, to say nothing of run-of-the-mill suicide.

Look, we live in a world where people get kidnapped or killed or go to remote places to kill themselves sometimes; the probability is not zero. Also – and I apologize if this sounds patronizing to searchers, I mean it sympathetically – extended fruitless efforts like this seem like they could get maddening, to the point where alternative explanations, ones where all your assumptions are wrong, would start looking really promising. Like you’re weaving this whole dubious story about how Ewasko might have gone down the one canyon without cell reception, climbing up and down hills in baking heat while out of water and injured – or there’s this other theory, waving its hands in the corner, going yeah, OR he’s just not in the park at all, dummy!

Its apparent simplicity is seductive.

Mahood apparently never put much stock in this sort of alternate model of the situation; Adam thought it was seriously likely for a while. I think it’s fair to say that “Ewasko died hiking in the park, in a regular kind of way” was always the strongest theory, but it’s the easiest fucking thing in the world for me to say that in retrospect, right? I wasn’t out there looking.

Maps and territories

Adam presents a theory about Ewasko’s final course of travel. It’s a solid and kind of stunning explanation that relies on deep familiarity with many of the aforementioned moving parts of the situation, and I do want you to see it laid out in full, so go watch his video. (Adam says Mahood disagrees with him about some of the specifics – Mahood hasn’t written anything more since the body was found, but he might at some point, so keep an eye out.)

I’ll just talk a little about one aspect of the explanation: Adam suspects Ewasko initially got lost because of a discrepancy between the maps at the time and the on-the-ground trail situation. See, multiple trails run out of the trailhead Ewasko parked at and through the area he was lost in, including official park-made trails and older abandoned Jeep trails.

Satellite view of parking lot off a road in the wilderness. Out of the parking lot, from the air, we see one faint curving foot trail, and on the other side of the lot, one very clear wide jeep trail.
Example of two trails coming out of the Juniper Flats trailhead where Ewasko’s car was parked. Adam thinks Ewasko could have taken the jeep trail and not even noticed the foot trail. | Adapted from Google satellite imagery from 2024. I made this image, but this exact point was first made by Adam in his video.

Adam believes that, partly as a result of the 1994 Desert Protection Act, Joshua Tree National Park was trying to promote the use of its own trails, as an ecosystem conservation method. He believes that Joshua Tree issued guidance to mapmakers not to mark (or to de-prioritize marking) trails like the old Jeep roads, and to prioritize marking the official trails, some of which were faint and not well-indicated with signage.

Adam thinks Ewasko left the parking lot on the Jeep road – which, to be fair, runs mostly parallel to the official trail, and rejoins to it later. But he thinks that Ewasko, when returning, realized there was another parallel trail to the south and wanted to take a different route back, causing him to look for an intersection. However, Ewasko was already on the southern trail, and the unlabeled intersection he saw was to another trail that took him deeper into the wilderness – beginning the terrible spiral.

Think of this in terms of Type I and Type II errors. It’s obvious why putting a non-existent trail on a map could be dangerous: you wouldn’t want someone going to a place where they think there is a trail, because they could get lost trying to find it. It’s less obvious why not marking a trail that does exist could be dangerous, but it may well have been in this case, because it can lead people to make other navigational errors.

Endings

The search efforts did not, per se, “work”. Ewasko’s body was not found because of the search effort, but by backpackers who went off-trail to get a better view of the sunset. His body was on a hill, about seven miles northeast of his car, very close to the cellphone ping radius. He was a mile from a road.

In Adam’s final video, on Ewasko’s coroner’s report, Adam explains that he doesn’t think he will ever learn anything else about Ewasko’s case. Like, he could be wrong about what he thinks happened, or someone may develop a better understanding of the facts, but there will be no new facts. Or at least, he doubts there will be. There’s just nothing left that’s likely to be found.

There are worse endings, but “we have answered some of our questions but not all of them and I think we’ve learned all we are ever going to learn” has to be one of the saddest.

Like I said, I think the searchers made an incredible, thoughtful effort. Sometimes, you have a very hard problem and you can’t solve it. And you try very hard to figure out where you’re wrong and how and what’s going on and what you do is not good enough.

These reports remind me of the wealth of material available on airplane crashes, the root cause analyses done after the fact. Mostly, when people die in maybe-stupid and sad accidents, their deaths do not get detailed investigations, they do not get incident reviews, they do not get root cause analyses.

But it’s nice that sometimes they do.

If you go out into the wilderness, bring plenty of water. Maybe bring a friend. Carry a GPS unit or even a PLB (personal locator beacon) if you might go into risky territory. Carry the 10 essentials. If you get lost, think really carefully before going even deeper into the wilderness and making yourself harder to find. And tell someone where you’re going.


Crossposted to: eukaryotewritesblog.com | Substack | LessWrong

Web-surfing tips for strange times

(h/t Bing’s copilot for the cover images, if you’re seeing them.)

Eukaryote Writes Blog is now syndicating to Substack. I have no plans to paygate content at this time, and new and old posts will continue to be available at EukaryoteWritesBlog.com. Call this an experiment and a reaching-out. If you’re reading this on Substack, hi! Thanks for joining me.

I really don’t like paygating. I feel like if I write something, hypothetically it is of benefit to someone somewhere out there, and why should I deny them the joys of reading it?

But like, I get it. You gotta eat and pay rent. I think I have a really starry-eyed view of what the internet sometimes is and what it still truly could be: a collaborative, free-information utopia.

But here’s the thing: a lot of people use Substack, and I also like the way it really facilitates supporting writers with money. I have a lot of beef with aspects of the corporate world, some of it probably not particularly justified but some of it extremely justified, and mostly it comes down to who gets money for what. I really like an environment where people are volunteering to pay writers for things they like reading. Maybe Substack is the route to that free-information web utopia. Also, I have to eat, and pay rent. So I figure I’ll give this a go.

Still, this decision made me realize I have some complicated feelings about the modern internet.

Hey, the internet is getting weird these days

Generative AI

Okay, so there’s generative AI, first of all. Facebook is lousy with it, and it’s creeping in as text on websites and in image search results. It’s the next iteration of algorithmic horror and it’s only going to get weirder from here on out.

I was doing pretty well on not seeing generic AI-generated images in regular search results for a while, but now they’re cropping up, and sneaking (unmarked) onto extremely AI-averse platforms like Tumblr. It used to be that you could look up pictures of aspic that you could throw into GIMP with the aspect logos from Homestuck and you would call it “claspic”, which is actually a really good and not bad pun and all of your friends would go “why did you make this image”. And in this image search process you realize you also haven’t looked at a lot of pictures of aspic and it’s kind of visually different than jello, but now you see some of these are from Craiyon and are generated and you’re not sure which ones you’ve already looked past that are not truly photos of aspic and you’re not sure what’s real and you’re put off of your dumb pun by an increasingly demon-haunted world, not to mention aspic.

(Actually, I’ve never tried aspic before. Maybe I’ll see if I can get one of my friends to make a vegan aspic for my birthday party. I think it could be upsetting and also tasty and informative and that’s what I’m about, personally. Have you tried aspic? Tell me what you thought of it.)

Search engines

Speaking of search engines, search engines are worse. Results are worse. The podcast Search Engine (which also covers other topics) has a nice episode saying that this is because of the growing hordes of SEO-gaming low-quality websites and discussing the history of these things, as well as Google’s new LLM-generated results.

I don’t have much to add – I think there is a lot here, I just don’t know it – except that I believe most search engines are also becoming worse at finding strings of text put into quotation marks, and are more likely to search for the words not-as-a-string. Bing was briefly the best at this that I’d seen; Google is the best now, but I think all of them have gotten worse. What’s the deal with that?

Censorship

Hey, did you know Youtube flags and demotes videos that have the word “suicide” or “kill yourself” (etc.) in them? Many Youtube video makers get paid by Youtube for views on their videos, but if they’re in that setup, a video can also be “demonetized”, meaning the maker doesn’t get paid for views. These videos can also be less likely to appear in search results – so it’s sort of a gray area between “just letting the content do whatever” and “deleting the content”. I don’t want to quite say that “you can’t say ‘suicide’ in new videos on Youtube”, but it comes out pretty close.

Tiktok has been on this for a while. I was never on Tiktok, but it seems pretty rough over there. But Youtube is now on the same train. You don’t have to have the word “suicide” written down in the description or have a viewer flag the video or anything: Youtube runs speech-to-text (presumably the same program that provides the automatic closed captions) and will detect if the word “suicide” is said in the audio track.
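(My guess at the shape of that pipeline, as a toy sketch – the flag list and everything else here is invented for illustration, not Youtube’s actual system:)

```python
import re

# Hypothetical flag list -- not Youtube's real policy terms.
FLAGGED_TERMS = ["suicide", "kill yourself"]

def flag_transcript(transcript: str) -> list[str]:
    """Return the flagged terms that appear in an auto-generated transcript."""
    text = transcript.lower()
    return [t for t in FLAGGED_TERMS if re.search(r"\b" + re.escape(t) + r"\b", text)]

# In the real pipeline, the transcript would come from speech-to-text on the audio track.
transcript = "today I want to talk about suicide prevention resources"
print(flag_transcript(transcript))  # ['suicide'] -> video gets demoted/demonetized
```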

Also, people are gonna talk about it. People making pretty sensitive videos or art pieces or just making edgy videos about real life still talk about it.

In fact, here are some of the ways Youtubers get around the way this topic is censored on the platform, which I have ranked from best to worst:

  1. Making sort of a pointing-gun-at-head motion with one’s fingers and pantomiming, while staring at the camera and pointing out the fact that you can’t say the word you mean – if it works for your delivery, it is a shockingly funny lampshade. Must be used sparingly.
  2. Taking their own life, ending themself, etc – Respectable but still grating if you pick up on the fact that they are avoiding the word “suicide”
  3. KYS and variations – Contaminated by somehow becoming an internet insult du jour but gains points for being directly short for the thing you want to say.
  4. Self-termination – Overly formal, not a thing anyone says.
  5. Unalived themselves – Unsalvageably goofy.
  6. Going down the sewer slide – Props for creativity; clear sign that we as a culture cannot be doing this.

So I know people who have attempted suicide, and of the ones I have talked to about this phenomenon, they fucking hate it. Being like “hey, this huge alienating traumatic experience in your life is actually so bad that we literally cannot allow you to talk about it” tends to be more alienating.

Some things are so big we have to talk about them. If we have to talk about them using the phrase “sewer slide”, I guess we will. But for christ’s sake, people are dying.

Survival tips

I’m reasonably online and I keep running into people who don’t know these. Maybe you’ll find something useful.

I was going to add in a whole thing about how “not all of this will apply to everyone,” but then I thought, why bother. Hey, rule one of taking advice from anyone or anything: sometimes it won’t apply to you! One day I will write the piece that applies to everyone, that enriches everyone’s life by providing them with perfectly new and relevant information. People will walk down the boulevards of the future thinking “hey, remember that one time we were all briefly united in a shining moment by the Ur-blog post that Georgia wrote a while ago.” It’s coming. Any day now. Watch this space.

USE MULTIPLE SEARCH ENGINES

Different web search engines are good at different things. This is surprisingly dynamic – I think a few years ago Bing was notably better at specific text (looking up specific quotes or phrases, in quotation marks – good for finding the sources of things).

I use DuckDuckGo day to day. For more complex queries or finding specific text, I switch to Google, and then if I’m looking for something more specific, I’ll also check Bing. I have heard fantastic things about the subscription search engine Kagi – they have a user-focused and not ad-focused search algorithm and also let you natively do things like just remove entire websites from search results.

Marginalia is also a fantastic resource. It draws from more text-heavy sources and tends to find you older, weirder websites and blogs, at the expense of relevance.

There are other search engines for more specialized applications, e.g. Google Scholar for research papers.

If you ever use reverse image search to find the source of images: I check all of Google Images, Tineye, and Yandex before giving up. They all have somewhat different image banks.

USE FIREFOX AS YOUR BROWSER

Here’s a graph of the most common browsers over time.

According to statcounter, around 2012 Chrome became the most common browser, and in the past few years well over 50% of internet usage has been from Chrome.
Source: https://gs.statcounter.com

Chrome is a Google browser with Google’s tracking built into it, saving and sending information to Google as you hop around the web. Many of these features can be disabled, but also, the more people use exclusively Chrome, the more control Google can exert over the internet.

For instance, by majorly restricting what kind of browser extensions people can create and use, which is happening soon and is expected to nerf adblockers.

DO NOT GO GENTLE INTO THAT GOOD NIGHT. USE FIREFOX. HELL FUCKING YES.

Please stick it to the man and support a diverse internet ecosystem. Use Firefox. You can customize it in a million ways. It’s privacy focused. (Yes, privacy on the web is still achievable.) It’s run by a nonprofit. It’s really easy to use and works well. It’s for desktop and mobile. Use Firefox.

(I also have a Chrome-derived backup browser, Brave, on my PC for the odd website that is completely broken either by Firefox or by my many add-ons and that I don’t want to troubleshoot, or for when I want to use Google’s auto-translation tools, which are epic – better than what I’ve found conveniently on Firefox. I don’t use it often! You can have two browsers. Nobody can stop you. But make one of them Firefox.)

READ BLOGS? GET AN RSS READER

I’ve heard from a few savvy people that they like the convenience of Substack blogs for keeping track of updates, and I was like – wait, don’t you have an RSS reader? Google didn’t have a monopoly on the RSS reader! The RSS reader lives on!

What it is: A lot of internet content published serially – blog posts, but other things too – has an RSS feed, which is a way of tagging the content so you can feed it into a program that will link to updates automatically. An RSS reader is a program that stores a list of RSS feeds, and when you use it, it goes and checks for new additions to those feeds, and brings them back to you. It’ll keep track of which ones you’ve clicked on already and not show you them again.
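(If you’re curious how little machinery that takes, here’s a minimal sketch using the Python feedparser library. The feed URLs are just examples – swap in whatever you actually read – and a real reader would persist the “seen” set between runs.)

```python
import feedparser  # pip install feedparser

# Example feeds -- substitute the blogs, podcasts, and news sources you follow.
FEEDS = [
    "https://eukaryotewritesblog.com/feed/",
    "https://www.lesswrong.com/feed.xml",
]

seen = set()  # a real reader would save this to disk between runs

def check_feeds():
    for url in FEEDS:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            key = entry.get("id") or entry.get("link")
            if key not in seen:
                seen.add(key)
                print(f"NEW: {entry.get('title', '(untitled)')} -> {entry.get('link')}")

check_feeds()
```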

This means you can keep track of many sources: Substacks, blogs on any other platform, podcasts, news outlets, webcomics, etc. Most good blogs are NOT on substack. That’s not a knock on substack, that’s just numbers. If substack is your only way of reading blogs you are missing out on vast swathes of the blogosphere.

I use Feedly, which has multi-device support, so I can have the same feed on both my phone and laptop.

If you want to run your own server for it, I hear good things about Tiny Tiny RSS.

There are a million more, and your options get wider if you only need to use it on one device. Look it up.

FIND SOME PEOPLE YOU TRUST.

If you find yourself looking up the same kinds of things a lot, look for experts, and go seek their opinion first.

This doesn’t have to only be for, like, hardcore research or current events or such. My role in my group house for the past several years has been “recluse who is pretty decent at home repairs”. Here is my secret: every time I run into a household problem I don’t immediately know how to solve, I aggressively look it up.

In this example, Wikihow is a great ally. Things like Better Homes and Gardens or Martha Stewart Living are also fairly well-known sources. If nothing else, I just try to look for something that was written by an expert and not a content mill or, god forbid, an LLM.

Sometimes your trusted source should be offline. There are definitely good recipe sites out there, but also if you really can’t stand the state of recipe search results, get a cookbook. I’m told experts write books on other subjects too. Investigate this. Report back to me.

PAY FOR THINGS YOU LIKE TO INCENTIVIZE THEIR EXISTENCE.

If you have the money, paying the creators and resources behind your favorite tools or stories or what have you will help them stay around. Your support won’t always be enough to save a project you love from being too much work for its creator to keep up with. But it’s gonna fucking help.

Hey –


If you don’t like Substack but want to support the blog, I am still on Patreon. But I kind of like what Substack’s made happen, and also many cool cats have made their way to it.

That said, here are some minor beefs with Substack as a host:

  1. I want to be able to customize my blog visually. There are very few options for doing this. The existing layout isn’t bad, and I’m sure it was carefully designed. And this gripe may sound trivial. But this is my site, and I think we lose something by homogenizing ourselves in a medium (the internet) that is for looking at. If I want to tank my readership by putting an obnoxious repeating grid of jpeg lobsters as my background, that’s my god-given right.

    (I do actually have plans to swap my WordPress site over to a self-hosted, self-designed website. I just have to, like, get good enough with HTML and especially CSS to do Gwern’s nice sidenotes, and figure out hosting and how to do comments. It’s gonna happen, though. Any day now.)
  2. I don’t like that I can only put other substack publications in the “recommendations” sideroll. It feels insular and social-network-y and a lot of my favorite publications aren’t on substack. I’ll recommend you a few the manual way now:

For your experience of Eukaryote Writes Blog, I think the major theoretical downside of this syndication is splitting the comments section. If someone sees the post on WordPress and leaves a comment there, a person reading on Substack won’t see it. What if there’s a good discussion somewhere?

But I already crosspost many of my posts to Lesswrong and usually if there’s any substantial conversation, it tends to happen there, not on the WordPress. Also sometimes my posts get posted on, like, Hacker News – which is awesome – and there are a bunch of comments there that I sometimes read when I happen to notice a post there but mostly I don’t. So this is just one more. I’ll see a comment for sure on LessWrong, Substack, or WordPress.

Anyway, glad to be here! Thanks for reading my stuff. Let me know if I get anything wrong. Download Firefox. On to more and better and stranger things.

Carl Sagan, nuking the moon, and not nuking the moon

In 1957, Nobel laureate microbiologist Joshua Lederberg and biostatistician J. B. S. Haldane sat down together and imagined what would happen if the USSR decided to explode a nuclear weapon on the moon.

The Cold War was on, Sputnik had recently been launched, and the 40th anniversary of the Bolshevik Revolution was coming up – a good time for an awe-inspiring political statement. Maybe they read a recent United Press article about the rumored USSR plans. Nuking the moon would make a powerful political statement on earth, but the radiation and disruption could permanently harm scientific research on the moon.

What Lederberg and Haldane did not know was that they were onto something – by the next year, the USSR really was investigating the possibility of dropping a nuke on the moon. They called it “Project E-4,” one of a series of possible lunar missions.

What Lederberg and Haldane definitely did not know was that that same next year, 1958, the US would also study the idea of nuking the moon. They called it “Project A119” and the Air Force commissioned research on it from Leonard Reiffel, a regular military collaborator and physicist at the Armour Research Foundation of the Illinois Institute of Technology. He worked with several other scientists, including a University of Chicago grad student named Carl Sagan.

“Why would anyone think it was a good idea to nuke the moon?”

That’s a great question. Most of us go about our lives comforted by the thought “I would never drop a nuclear weapon on the moon.” The truth is that given a lot of power, a nuclear weapon, and a lot of extremely specific circumstances, we too might find ourselves thinking “I should nuke the moon.”

Reasons to nuke the moon

During the Cold War, dropping a nuclear weapon on the moon would show that you had the rocketry needed to aim a nuclear weapon precisely at long distances. It would show off your spacefaring capability. A visible show could reassure your own side and frighten your enemies.

It could do the same things for public opinion that putting a man on the moon ultimately did. But it’s easier and cheaper:

  • As of the dawn of ICBMs you already have long-distance rockets designed to hold nuclear weapons
  • Nuclear weapons do not require “breathable atmosphere” or “water”
  • You do not have to bring the nuclear weapon safely back from the moon.

There’s not a lot of English-language information online about the USSR E-4 program to nuke the moon. The main reason they cite is wanting to prove that USSR rockets could hit the moon.3 The nuclear weapon attached wasn’t even the main point! That explosion would just be the convenient visual proof.

They probably had more reasons, or at least more nuance to that one reason – again, there’s not a lot of information accessible to me.* We have more information on the US plan, which was declassified in 1990, and probably some of the motivations for the US plan were also considered by the USSR for theirs.

  • Military
    • Scare USSR
    • Demonstrate nuclear deterrent1
      • Results would be educational for doing space warfare in the future2
  • Political
    • Reassure US people of US space capabilities (which were in doubt after the USSR launched Sputnik)
      • More specifically, that we have a nuclear deterrent1
    • “A demonstration of advanced technological capability”2
  • Scientific (they were going to send up batteries of instruments somewhat before the nuking, stationed at distances from the nuke site)
    • Determine thermal conductivity from measuring rate of cooling (post-nuking) (especially of below-dust moon material)
    • Understand moon seismology better via seismograph-type readings from various points at distance from the explosion
      • And especially get some sense of the physical properties of the core of the moon2
MANY PROBLEMS, ONE SOLUTION: BLOW UP THE MOON
As stated by this now-unavailable A Softer World merch shirt design. Hey, Joey Comeau and Emily Horne, if you read this, bring back this t-shirt! I will buy it.

Reasons to not nuke the moon

Aleksandr Zheleznyakov, a Russian rocket engineer, explained some reasons the USSR did not go forward with its project:

  • Nuke might miss the moon
    • and fall back to earth, where it would detonate, because of the planned design which would explode upon impact
      • in the USSR
      • in the non-USSR (causing international incident)
    • and circle sadly around the sun forever
  • You would have to tell foreign observatories to watch the moon at a specific time and place
    • And… they didn’t know how to diplomatically do that? Or how to contact them?

For the US, we have less information about why the plan went nowhere. While they were not necessarily using the same sea-mine style detonation system that the planned USSR moon-nuke would have3, they were still concerned about a failed launch resulting in not just a loose rocket but a loose nuclear weapon crashing to earth.2

(I mean, not that that’s never happened before.)

Even in the commissioned report exploring the feasibility, Leonard Reiffel and his team clearly did not want to nuke the moon. They outline several reasons this would be bad news for science:

  • Environmental disturbances
  • Permanently disrupting possible organisms and ecosystems
    • In maybe the strongest language in the piece, they describe this as “an unparalleled scientific disaster”
  • Radiological contamination
    • There are some interesting things to be done with detecting subtle moon radiation – effects of cosmic rays hitting it, detecting a magnetosphere, various things like the age of the moon. Nuking the moon would easily spread radiation all over it. It wouldn’t ruin our ability to study this, especially if we had some baseline instrument readings up there first, but it wouldn’t help either.
  • To achieve the scientific objective of understanding moon seismology, we could also just put detectors on the moon and wait. If we needed more force, we could just hit the moon with rockets, or wait for meteor impacts.

I would also like to posit that nuking the moon is kind of an “are we the baddies?” moment, and maybe someone realized that somewhere in there.

Please don't do that :(

Afterwards

That afternoon when they imagined the USSR nuking the moon, Lederberg and Haldane ran the numbers and guessed that a nuclear explosion on the moon would be visible from earth. So the USSR’s incentive was there. They couldn’t do much about that, but they figured such a project would be politically feasible, and that this was frightening, because the contamination would disrupt and scatter debris all over the unexplored surface of the moon – the closest and richest site for space research, a whole mini-planet of celestial material that had not passed through the destructive gauntlet of earth’s atmosphere (as meteors do, the force of reentry blasting away temperature-sensitive and delicate structures).
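(I don’t know what numbers Lederberg and Haldane actually used, but the flavor of the back-of-the-envelope is easy to reproduce. Here’s a toy version with assumptions I made up – yield, visible-light fraction, flash duration – just to show that the answer plausibly comes out “yes, visible”:)

```python
import math

# All assumptions here are mine, for illustration -- not the 1957 calculation.
yield_kt = 1.0                  # assume a ~1 kiloton device
energy_j = yield_kt * 4.184e12  # 1 kt of TNT in joules
visible_fraction = 0.1          # assume ~10% emerges as visible light
flash_duration_s = 1.0          # assume the flash lasts about a second

earth_moon_m = 3.844e8          # mean Earth-Moon distance, meters
flux = (energy_j * visible_fraction / flash_duration_s) / (4 * math.pi * earth_moon_m**2)

faint_star_flux = 1e-10         # roughly a barely-naked-eye (magnitude ~6) star, in W/m^2
print(f"~{flux:.1e} W/m^2 at Earth, ~{flux / faint_star_flux:,.0f}x a barely visible star")
```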

Lederberg couldn’t stop the USSR from nuking the moon. But early in the space age, he began lobbying against contaminating outer space. He pushed for a research-based approach and international cooperation, back when cooperating with the USSR was not generally on the table. His interest and scientific clout led colleagues to take this seriously. We still do this – we still sanitize outgoing spacecraft so that hardy Earth organisms will (hopefully) not colonize other planets.

A rocket taking earth organisms into outer space is forward contamination.

Lederberg then took the idea a few steps further and realized that if there was a chance Earth organisms could disrupt or colonize Moon life, there was a smaller but deadlier chance that Moon organisms could disrupt or colonize Earth life.

A rocket carrying alien organisms from other planets to earth is back contamination.

He realized that in returning space material to earth, we should proceed very, very cautiously until we can prove that it is lifeless. His efforts were instrumental in causing the Apollo program to have an extensive biosecurity and contamination-reduction program. That program is its own absolutely fascinating story.

Early on, a promising young astrophysicist joined Lederberg in A) pioneering the field of astrobiology and B) raising awareness of space contamination – former A119 contributor and future space advocate Carl Sagan.

Here’s what I think happened: a PhD student fascinated with space ends up working, alongside his PhD advisor, on a secret project about nuking the moon. He assists with this work, finds it plausible, and is horrified for the future of space research. Stumbling out of this secret program, he learns about a renowned scientist (Joshua Lederberg) calling loudly for care about space contamination.

Sagan perhaps learns, upon further interactions, that Lederberg came to this fear after considering the idea that our enemies would detonate a nuclear bomb on the moon as a political show.

Why, yes, Sagan thinks. What if someone were foolish enough to detonate a nuclear bomb on the moon? What absolute madmen would do that? Imagine that. Well, it would be terrible for space research. Let’s try and stop anybody from ever doing that.

A panel from Homestuck of Dave blasting off into space on a jetpack, with Carl Sagan's face imposed over it. Captioned "THIS IS STUPID"
Artist’s rendition. || Apologies to, inexplicably, both Homestuck and Carl Sagan.

And as far as that goes, he made it! Over fifty years later and nobody thinks about nuking the moon very often anymore. Good job, Sagan.

This is just speculation. But I think it’s plausible.

If you like my work and want to help me out, consider checking out my Patreon! Thanks.

References

* We have, like, the personal website of a USSR rocket scientist – reference 3 below – which is pretty good.

But then we also have an interview that might have been done by journalist Adam Tanner with Russian rocket scientist Boris Chertok, and published by Reuters in 1999. I found this on an archived page from the Independent Online, a paper that syndicated with Reuters, where it was uploaded in 2012. I emailed Reuters and they did not have the interview in their archives, but they did have a photograph taken of Chertok from that day, so I’m wondering if they published the article but simply didn’t properly archive it later, and if the Independent Online is the syndicated publication that digitized this piece. (And then later deleted it, since only the Internet Archived copy exists now.) I sent a message to who I believe is the same Adam Tanner who would have done this interview, but haven’t gotten a response. If you have any way of verifying this piece, please reach out.

1. Associated Press, as found in the LA Times archive, “U.S. Weighed A-Blast on Moon in 1950s.” 2000 May 18. https://www.latimes.com/archives/la-xpm-2000-may-18-mn-31395-story.html

2. Project A119, “A Study of Lunar Research Flights”, 1959 June 15. Declassified report: https://archive.org/details/DTIC_AD0425380

This is an extraordinary piece to read. I don’t think I’ve ever read a report where a scientist so earnestly explores a proposal and tries to solve various technical questions around it, and clearly does not want the proposal to go forward. For instance:

It is not certain how much seismic energy will be coupled into the moon by an explosion near its surface, hence one may develop an argument that a large explosion would help ensure success of a first seismic experiment. On the other hand, if one wished to proceed at a more leisurely pace, seismographs could be emplaced upon the moon and the nature of possible interferences determined before selection of the explosive device. Such a course would appear to be the obvious one to pursue from a purely scientific viewpoint.

3. Aleksandr Zheleznyakov, translated by Sven Grahn, updated 1999 or so. “The E-4 project – exploding a nuclear bomb on the Moon.” http://www.svengrahn.pp.se/histind/E3/E3orig.htm

Crossposted to LessWrong.

Internet Harvest (2024, 1)

Internet Harvest is a selection of the most succulent links on the internet that I’ve recently plucked from its fruitful boughs. Feel free to discuss the links in the comments.

Biosecurity

US COVID and flu website + hotline for getting prescribed Paxlovid, for free, for anyone with a positive COVID test and risk factors.

Register now to access free virtual care and treatment for COVID-19 and Flu, 24 hours a day, 7 days a week. Sign up anytime, whether you are sick or not.

https://www.test2treat.org/

I unfortunately had cause to use this recently, and I was struck by how easy it was – as well as the fact that I did not have to talk to anyone via phone or video call.

(It was an option, and they indicated at a couple points that a medical professional might call me if they had questions, so you should be prepared, but in my case they didn’t.) The whole thing including getting the prescription was handled over text digitally. This is fantastic.

First fatal case of alaskapox, a novel orthopox virus, in an immunocompromised patient. Orthopox is the virus group that includes smallpox, monkeypox, and cowpox. Alaskapox was discovered in 2015 and seems to be spread by rodents. There have been seven total human cases so far.

The University of Minnesota Center for Infectious Disease Research and Policy (CIDRAP) has opened a Chronic Wasting Disease (CWD) Contingency Planning Project – a group of experts who are planning for the possibility of CWD spilling over into people.

I think this is a deeply important kind of project to be doing. You see this in some places in some fashion – for instance, there’s a lot of effort and money spent understanding and tracking and controlling avian influenza (a strain which has a high mortality in humans but isn’t infectious between humans, just birds – for now.) But often, this kind of proactive pandemic prevention work isn’t done, even when the evidence is there.

(I wrote about the possibility of chronic wasting disease spilling into humans a few months ago. I ended up supposing that it was possible, but looking at the infection risk posed by another spilled-over prion disease from a more common animal, BSE, it seems like the absolute risk from prion diseases that can infect humans is extremely low. I don’t think people on this project would necessarily disagree with that, but there are a lot of unknowns and plenty of reasons to take even a low risk of a highly lethal disease spillover very cautiously. Still, I’ll have to read up on it, there may well be a higher risk than I assumed.)

Related: first noticed cases of Alzheimer’s disease transmitted between people (in patients injected with human-derived human growth hormone, decades later). In a past prion post, I wrote: “Meanwhile, Alzheimer’s disease might be slightly infectious- if you take brain extracts from people who died of Alzheimer’s, and inject them into monkey’s brains, the monkeys develop spongy brain tissue that suggests that the prions are replicating. This technically suggests that the Alzheimer’s amyloids are infectious, even if that would never happen in nature.” Well, it didn’t happen naturally, but I guess it did happen. (h/t Scott at Astral Codex Ten.)

The design history of the biohazard symbol.

Other biology

“Obelisks” are potentially a completely new kind of tiny microorganism, identified from metagenomic RNA sequencing.

One of the best stories in a scientific paper is “Ants trapped for years in an old bunker; survival by cannibalism and eventual escape” by Rutkowski et al, 2019.

First of all: the discovery of ants falling into a pipe that led to a sealed bunker in Poland. Once inside, the ants couldn’t climb back out. There were no plants or other life in the bunker, so the ants survived on other organisms that fell into the bunker, including eating their own dead (which they wouldn’t normally do, but if you’re in a tight spot, like an unused former nuclear weapons bunker, calories is calories).

Second: after studying how they lived, the scientists tried transporting a small group of Bunker Ants to the surface to make sure they wouldn’t immediately behave in some kind of abnormal destructive way toward surface ants.

Then, when they didn’t, the scientists – in what I see as a breathtaking act of compassion – installed a plank into the pipe, so that the Bunker Ants could climb out of the bunker and be on the surface again.

Flash photo taken in a small grotty bunker room. In the middle are two new planks nailed together to make a bridge extending from the dirty floor of the bunker to a hole in the ceiling.
God shows up and apologizes for not noticing us sooner and says she’ll have the angels install one of these in the sky. [Image: Rutkowski et al 2019]

NEW DEEP SEA ANIMALS LOCATED. Listen. I’ve written about this before – I know so much about the weird little women of the deep ocean and still every time I learn some more there are NEW STRANGER WOMEN DOWN THERE. This is also true of all of humanity learning about the deep ocean I guess. You simply cannot have a beat on this place.

Other bad things

A Hong Kong finance worker joined a multi-person video call where all his colleagues’ videos and their voices were deepfakes. It was a scam and the worker was tricked into transferring them millions of dollars. So that’s a thing that can happen now!

Bellingcat’s investigation into a tugboat spilling oil off the coast of Tobago. (The first piece is linked, there have been more updates since.) I love Bellingcat; I’ve talked about this before. My reaction to keeping up with this series is equal parts “those wizards have done it again” and “there be some specific-ass websites in this world.”

…But when there’s not detailed public-facing information about something, you can make your own, as shown by volunteers tracking ICE deportations by setting up CCTV facing Boeing Field in Seattle and showing up weekly to watch the feed and count how many chained detainees are boarded onto planes. This is laudable dedication.

Do you have an off-brand video doorbell? Get rid of it! They’re incredibly insecure. They aren’t even encrypted. If you have an on-brand video doorbell, maybe still get rid of it, or at least switch it to using local storage. At least, Ring has been ending the thing where they make it easy for police to get footage without warrants, but there are other brands that might have different systems and if you ask me it’s pretty bad that they had that in the first place. (H/t Schneier on Security)

You know who is giving police sensitive customer personal information without warrants? Pharmacies! Yikes! (Again, h/t Schneier on Security)

Other interesting things

Mohists as early effective altruists? Ozy at Thing of Things writes about the Mohist philosophy of ancient China. I knew a little bit about this, but learning more, it’s even cooler than I thought, and the parallels to modern rationality are surprising.

MyHouse.WAD is a fan-created map for the video game “Doom” that promises to be a map of a childhood home and turns into an evocative horror experience. I don’t know anything about Doom, so I’ve only experienced it in the form of youtube videos about the (real! playable!) map. You also don’t need to know anything about Doom to appreciate it. Power Pak’s video “MyHouse.WAD – Inside Doom’s Most Terrifying Mod” is the most popular video and for good reason.

If you liked that, you may also enjoy Spazmatic Banana’s “doom nerd blindly experiences myhouse.wad (and loses his mind)” (exactly what it sounds like) as well as DavidXNewton’s “The Machinations of myhouse.wad (How it works)” series (which, as it sounds like, explains how the map works. Again, well explained if you do not know a thing about Doom modding.)

The earliest ARG began in the 1980s at the very beginning of the internet age and is based around a supposed research project that made a dimensional rift in the (real) ghost town of Ong’s Hat, New Jersey.

The world’s largest terrestrial vehicle is the Bagger 293, an otherworldly-looking machine that scrapes up soil and rock to dig open-pit mines.

A colossal bucket-wheel excavator device. It looks kind of like a big shipping crane with a circular sawblade made of excavator buckets all frankensteined together. Some tiny people indicate the scale.
There’s debate to be had on the virtues or lack thereof of open pit mining, but I think we can all agree: they made a really big machine about it.

A cool piece on the woman who won the “Red Lantern” award for coming in last in the 2022 Iditarod. (H/t Briar.)


A short story: The Mother of All Squid Builds a Library. (H/t Ozy.)

Kelsey Piper’s piece on regulations and why it’s good that the FAA lets parents on airplanes carry babies in their lap, even though this is known to be less safe in the event of plane accidents than requiring babies to have their own seats.

A 1500s illustration of three Aztec people with fancy food dishes in front of them.

Book review: Cuisine and Empire

[Header: Illustration of meal in 1500s Mexico from the Florentine Codex.]

People began cooking our food maybe two million years ago and have not stopped since. Cooking is almost a cultural universal. Bits of raw fruit or leaves or flesh are a rare occasional treat or garnish – we prefer our meals transformed. There are other millennia-old procedures we use to turn raw ingredients into food: separating parts, drying, soaking, slicing, grinding, freezing, fermenting. We do all of this for good reason: Cooking makes food more calorically efficient and less dangerous. Other techniques contribute to this, or help preserve food over time. Also, it tastes good.

Cuisine and Empire by Rachel Laudan is an overview of human history by major cuisines – the kind of things people cooked and ate. It is not trying to be a history of cultures, agriculture, or nutrition, although it touches on all of these things incidentally, as well as some histories of things you might not expect, like identity and technology and philosophy.

Grains (plant seeds) and roots were the staples of most cuisines. They’re relatively calorically dense, storeable, and grow within a season.

  • Remote islands really had to make do with whatever early colonists brought with them. Not only did pre-Columbian Hawaii not have metal, they didn’t have clay to make pots with! They cooked stuff in pits.

Running in the background throughout a lot of this is the clock of domestication – with enough time and enough breeding you can make some really naturally-digestible varieties out of something you’d initially have to process to within an inch of its life. It takes time, quantity, and ideally knowledge and the ability to experiment with different strains to get better breeds.

Potatoes came out of the Andes and were eaten alongside quinoa. Early potato cuisines didn’t seem to eat a lot of whole or cut-up potatoes – they processed the shit out of them, chopping, drying or freeze-drying them, soaking them, reconstituting them. They had to do all of this because the potatoes weren’t as consumer-friendly as modern breeds – less digestible, more phytotoxins, etc.

As cities and societies caught on, so did wealth. Wealthy people all around the world started making “high cuisines” of highly-processed, calorically dense, tasty, rare, and fancifully prepared ingredients. Meat and oil and sweeteners and spices and alcohol and sauces. Palace cooks came together and developed elaborate philosophical and nutritional theories to declare what was good to eat.

Things people nigh-universally like to eat:

  • Salt
  • Fat
  • Sugar
  • Starch
  • Sauces
  • Finely-ground or processed things
  • A variety of flavors, textures, options, etc
  • Meat
  • Drugs
    • Alcohol
    • Stimulants (chocolate, caffeine, tea, etc)
  • Things they believe are healthy
  • Things they believe are high-class
  • Pure or uncontaminated things (both morally and from, like, lead)

All people like these things, and low cuisines were not devoid of joy, but these properties showed up way more in high cuisines than low cuisines. Low cuisines tended to be a lot of grain or tubers and bits of whatever cooked or pickled vegetables or meat (often wild-caught, like fish or game) could be scrounged up.

In the classic way that oppressive social structures become self-reinforcing, rich people generally thought that the rich were better off eating this kind of carefully balanced diet, whereas for the poor, eating meager, boring foods wasn’t just necessary – it was good for them. They were physically built for that. Eating a wealthy diet would harm them.

In lots of early civilizations, food and sacrifice of food was an important part of religion. Gods were attracted by offered meals or meat and good smells, and blessed harvests. There were gods of bread and corn and rice.

One thing I appreciate about this book is that it doesn’t just care about the intricate high cuisines, even if they were doing the most cooking, the most philosophizing about cooking, and the most recordkeeping. Laudan does her best to pay at least as much attention to what the 90+% of regular people were eating all of the time.


Here’s a great passage on feasts in Ancient Greece, at the Temple of Zeus in Olympia, at the start of each Olympic games (~400 BCE):

On the altar, ash from years of sacrifice, held together with water from the nearby River Alpheus, towered twenty feet into the air. One by one, a hundred oxen, draped with garlands, raised especially for the event and without marks of the plow, were led to the altar. The priest washed his hands in clear water in special metal vessels, poured out libations of wine, and sprinkled the animals with cold water or with grain to make them shake their heads as if consenting to their death. The onlookers raised their right arms to the altar. Then the priest stunned the lead ox with a blow to the base of the neck, thrust in the knife, and let the blood spill into a bowl held by a second priest. The killing would have gone on all day, even if each act took only five minutes.

Assistants dragged each felled ox to one side to be skinned and butchered. For the assembled crowd, cooks began grilling strips of beef, boiling bones in cauldron, baking barley bannocks, and stacking up amphorae of wine. For the sacrifice, fat and leg and thigh bones rich in life-giving marrow were thrown on a fire of fragrant poplar branches, and the entrails were grilled. Symbolizing union, two or three priests bit together into each length of intestines. The bones whitened and crumbled; the fragrant smoke rose to the god.

Ancient Greek farmers had thin soil and couldn’t do much in the way of deliberate irrigation, so their food supply was more unpredictable than other places.

Country people kept a three-year supply of grain to protect against harvest failure and a four-year supply of oil. 

That’s so much!

That poor soil is also why the olive tree was relied on for oil, rather than grain – which had better yields and took way less time to reach producing age. You could grow olive trees in places you couldn’t farm grain. And now we all know and love the oil from this tree. A tree is a wild place to get oil from! Similar story for grapevines.

  • The Spartans really liked this specific pork and blood soup called “black broth”.

This book was a fun read, on top of the cool history. Laudan has a straightforward listful way of describing cuisines that really puts me in mind of a Redwall or a George R. R. Martin feast description.

A royal meal in the Indian Mauryan Empire (circa 300 BCE or so):

For court meals, the meat was tempered with spices and condiments to correct its hot, dry nature and accompanied by the sauces of high cuisine. Buffalo calf spit-roasted over charcoal and basted with ghee was served with sour tamarind and pomegranate sauces. Haunch of venison was simmered with sour mango and pungent and aromatic spices. Buffalo calf steaks were fried in ghee and seasoned with sour fruit, rock salt, and fragrant leaves. Meat was ground, formed into patties, balls, or sausage shapes, and fried, or it was sliced, dried to jerky, and then toasted.

Or eating in Teotihuacan, Mexico, around 600 CE:

To maize tamales or tortillas were added stews of domestic turkeys and dogs, and deer, rabbits, ducks, small birds, iguanas, fish, frog, and insects caught in the wild. Sauces were made with basalt pestles and mortars that were used to shear fresh green or dried and rehydrated red chiles, resulting in a vegetable puree that was thickened with tomatillos (Physalis philadelphica) or squash seeds. Beans, simply simmered in water, provided a tasty side dish. For the nobles, there were gourd bowls of foaming chocolate, seasoned with annatto and chili.

I’m a vegetarian who has no palate for spice and now all I can think about is eating dog stew made with sheared fresh green chiles and plain beans.

Be careful about reading this book while broke on an airplane. You will try to convince yourself this is all academic and that you’re not that curious about what iguana meat tastes like. You’ll lose that internal battle. Then, in desperation, your brain will start in on a new phase. You’ll tell yourself, as you scrape the last of your bag of traveler’s food – walnut meat, dried grapes, and pieces of sweet chocolate – that you need only wait to be brought a complimentary snack of baked wheat crackers flavored with salt, and a cup of hot coffee with cow’s milk, sweetened with cane sugar, and also that this is happening while you are flying. In this moment, you will be enlightened.


Grindstones are very important throughout history. A lot of cultures used hand grindstones at first and moved to water- or animal-driven mills later. You grind grain to get flour, but you also grind things to get oil, spices, a different consistency of root, etc. People spent a lot of time grinding grain. There are a million kinds of hand grindstone. Some are still used today. When Roman soldiers marched around continents, they brought with them a relatively efficient rotary grindstone. They used mules to carry one 60-pound grindstone per 8 people. Every day, a soldier would grind for an hour and a half to feed those eight people. The grain would be stolen from storehouses conquered along the way.


Chapter 3 on Buddhist cuisines throughout Asia was especially great. Buddhism spread as sort of a reaction to the high sacrificial meat-n-grain cuisine of the time – a religious asceticism that really caught on. Ashoka spread it in India in 250 BCE, and over the following centuries it slowly seeped into China. Buddhists did not kill animals (mostly) nor drink alcohol, and ate a lot of rice. White rice, sugar, and dairy spread through Asia. In both China and India, as the rich got into it, Buddhism became its own new high cuisine: rare vegetables, sugar, ghee and other dairy, tea, and elaborate vegetarian dishes. So much for asceticism!

There is an extensive history of East Asian tofu and gluten-based meat substitutes that largely came out of vegetarian Buddhist influence. A couple 1100s and 1200s CE Chinese cookbooks are purely vegetarian and have recipes for things like mock lung (you know, like a mock hamburger or mock chicken, but if you’re missing the taste of lung.) (You might be interested in modern adaptations from Robban Toleno.)

Diets often go with religion. It’s a classic way to divide culture, and also, food and philosophy and ideas about health have always gone hand in hand in hand. Islamic empires spread cuisine over the Middle East. Christian empires brought their own food with them to other parts of the world.

A lot of early cuisines in Europe, the Middle East, India, Asia, and Mesoamerica were based on correspondences between types of food and elements and metaphysical ideas. You would try to reach balance. In Europe in the 1500s, these old incorrect ideas about nutrition were replaced with bold new incorrect ideas about nutrition. Instead of corresponding to four elements, food was actually made of three chemical elements: salt, oil, and vapor. This came from the Swiss visionary Paracelsus, who thought chemistry could be based on the Bible and was, a century later, called a “master at murdering folk with chemistry”.

Fermenting took on its own magic:

Paracelsus suggested that “ferment” was spiritual, reinterpreting the links between the divine and bread in terms of his Protestant chemistry. When ferment combined with matter (massa in Latin, significantly also the word for bread dough), it multiplied. If this seems abstract, consider what happened in bread making. Bakers used a ferment or leaven[…] and kneaded it with flour and water. A few hours later, the risen dough was full of bubbles, or spirit. Ferment, close to the soul itself, turned lifeless stuff into vibrant, living bodies filled with spirit. The supreme example of ferment was Christ, described by the chemical physicians as fermentum, “the food of the soul.”

Again, cannot stress enough that the details of this food cosmology still got most things wrong. But I think they weren’t far off with this one.

There was an article I had bookmarked years ago about the very early days of microbiology and how many people interpreted this idea of tiny animalcules found in sexual fluid and sperm as literal demons. Does anyone know about this? I feel like these dovetail very nicely in a history of microbiological theology.


Corn really caught on in the 1800s as a food for the poor in East and Central Africa, Italy, Japan, India, and China. I don’t really know how this happened. I assume it grew better in some climates than native grains, like potatoes did in Europe?

Corn cuisine in the Americas knew to treat the corn with lye to release more of its nutrients, kill toxins, and make it taste better. This is called nixtamalization. When corn spread to Eurasia, it was grown widely, but nixtamalization didn’t make it over. The Eurasian eaters had to get those nutrients from elsewhere. They still ate corn, but it was a worse time!

  • In Iceland, where no crops would grow, people would use dried fish called “stockfish” and spread sheep butter on it and eat it instead of bread.

Caloric efficiency was a fun recurring theme. See again, the slow adoption of the potato into Europe. Cuisine has never been about maximizing efficiency. Once bare survival is assured, people want to eat what they know and what has high status in their minds.

I think this is a statement about the feedback cycles of individual people, for instance, subsistence farmers. Suppose you’re a Polish peasant in 1700 and you struggle by, year after year, growing wheat and rye. But this year you have access to potatoes, a food you somewhat mistrust. You might trust it enough to eat a cooked potato handed to you if you were starving – but when you make decisions about what to plant for a year, you will be reluctant to commit you and your family to a diet of a possibly-poisonous food (or to a failed crop – you don’t know how to grow potatoes either). Even if it’s looking like a dry year – especially if it’s looking like a dry year! – you know wheat and rye. You trust wheat and rye. You’ve made it through a lean year of wheat and rye before. You’ll do it again.

People are reluctant to give up their staple crops, but they do eventually supplant them. Barley was solidly replaced by the somewhat-more-efficient wheat throughout Europe, millet by rice and wheat in China. But we settled on the ones we like:

The staples that humans had picked out centuries before 1000 B.C.E. still provide most of the world’s human food calories. Only sugarcane, in the form of sugar, was to join them as a major food source.

Around 1650 in Europe, protestant-derived French cuisine overtook high Catholic cuisine as the main food of the European aristocracy.

Catholic cuisine:
  • Roasts
  • Fancy pies
  • Pottage
  • Cold foods are bad for you
  • Fasting dishes
  • Lard

French cuisine:
  • Pastry
  • Fancy sauces
  • Bouillons and extracts
  • Raw salads
  • Focus on vegetables
  • Butter

Coming up in more recent times, say the 1700s, was a very slow equalizing in society:

As more nations followed the Dutch and British in locating the source of rulers’ legitimacy not in hereditary or divine rights but in some form of consent or expression of the will of the people, it became increasingly difficult to deny to all citizens the right to eat the same kind of food.

After the French Revolution, high French cuisine was almost canceled in France. Everyone should eat as equals, even if the food was potatoes! Fortunately… unfortunately… as it happened, Napoleon came in after not too long and imperial high cuisine was back on a very small number of menus.

Speaking of potatoes and self-governance:

The only place where potatoes were adopted with enthusiasm was in distant [from Europe] New Zealand. The Maoris, accustomed to the subtropical roots that they had introduced to the North Island, welcomed them when introduced by Europeans in the 1770s because they grew in the colder South Island. Trading potatoes for muskets with European whalers and sealers enabled the Maoris to resist the British army from the 1840s to the 1870s.

Meanwhile, in Europe: Hey, we’re back to meat and grain! Britain really prided itself on beef and attributed the strength of its empire to beef. Even colonized peoples were like “whoa, maybe that beef and bread they’re eating really is making them that strong, we should try that.” Here’s a 1900 ad for beef extract that aged poorly:

[Source of this version. The brand of beef extract is spelled out of British colonies.]

That said, I did enjoy Laudan’s defense of British food. Starting in 1800, the British Empire was well underway, and what we now think of as stereotypical British cuisine was developing. It was heavy in sugar and sweets, white bread, beef, and prepared food. During the early industrial revolution, food and nutrition and the standard of living went down, but by the 1850s, all of it really came back.

It is worth noting that few cuisines have been so roundly condemned as nutritional and gastronomical disasters as British cuisine.

But Laudan points out that this food was not the aristocrat food (they were still eating French cuisine). It was the food of the working city poor. This is the rise of the “middling cuisines”, a middle path between the fancy high cuisine of a truly tiny percent of society and the humble cuisine of peasants who often faced starvation. For once, they had enough to eat. This was new.

After discussing the various ways in which the diet may have been bland or unappealing compared to neighboring cuisines –

Nonetheless, from the perspective of the urban salaried and working classes, the cuisine was just what they had wished for over the centuries: white bread, white sugar, meat, and tea. A century earlier, not only were these luxuries for much of the British population, but the humble were being encouraged to depend on potatoes, not bread, a real comedown in a society in which some kind of bread, albeit a coarse one, had been central to well-being for centuries. Now all could enjoy foodstuffs that had been the privilege of the aristocracy just a few generations earlier. Indeed, the meal called tea came close to being a true national cuisine. Even though tea retained traces of class distinctions, with snobberies about how teacups should be held, or whether milk or tea should be put into the cup first, everyone in the country, from the royal family, who were painted taking tea, to the family of a textile worker in the industrial north of the country, could sit down to white bread sandwiches or toast, jam, small cakes, and an iced sponge cake as a centerpiece. They could afford the tea that accompanied the meal. Set out on the table, tea echoed the grand buffets of eighteenth-century French high cuisine. [...] What seemed like culinary decline to those Britons who had always dined on high or bourgeois cuisine was a vast improvement to those enjoying those ampler and more varied cuisines for the first time.

[...]

Although to this day food continues to be used to reinforce minor differences in status, the hierarchical culinary philosophy of ancient and traditional cuisines was giving way to the more egalitarian culinary philosophy of modern cuisines.

A lot of this was facilitated by imperialism and/or outright slavery. The tea itself, for instance. But Britain was also deeply industrialized. Increased crop productivity, urbanization, and industrial processing were also making Britain’s home-grown food – wheat, meat – cheaper too. Or bringing these processes home. At the start of this period, sugar had been grown and harvested by slaves to feed Europe’s appetites, but in 1800, Prussian inventors figured out how to make sugar at scale from beets. 

The work was done by men paid salaries or wages, not by slaves or indentured laborers. The sugar was produced in northern Europe, not in tropical colonies. And the price was one all Europeans could afford. 

This was the sugar the British were eating then. Industrialization offered factory production of foods, canning, wildly cheap salt, and refrigeration.

We’re reaching the modern age, where the empires have shrunk and most people get enough calories and have access to industrially-cheap food and the fruits of global trade. Laudan discusses at length the hamburger and instant ramen – wheat flour, fat, meat or meat flavor, low price, and convenience. New theories of nutrition developed and we definitely got them right this time. The empires break up and worldwide leaders take pride in local cuisines, manufacturing a sense of identity through food if needed. Most people have the option of some dietary diversity and a middling cuisine. Go back to that list of things people like to eat. Most of us have that now! Nice!

  • Nigeria is the biggest importer of Norwegian stockfish. It caught on as a relief food delivered during Nigeria’s Biafran civil war. Here’s a 1960s photo of a Nigerian guy posing in a Bergen stockfish warehouse.

Aw, wait, is this a book review? Book review: Great stuff. There’s a lot of fascinating stuff not included in this summary. I wish it had more on Africa but I did like all the stuff about Eurasia that was in there. I feel like there are a few cultures with really really meat heavy cuisines – like Saami or Inuit cuisine – that could have been at least touched on. But also those aren’t like major cuisines and I can just learn about those on my own. Overall I appreciated the unwavering sense of compassion and evenhandedness – discussing cuisines and falsified theories of nutrition without casting judgment. Everyone’s just trying to eat dinner.

Rachel Laudan also has a blog. It looks really cool.

Cuisine and Empire by Rachel Laudan

The book is “Cuisine and Empire” by Rachel Laudan, 2012. h/t my friend A for the recommendation.


More food history from Eukaryote Writes Blog: Triptych in Global Agriculture.

If you want to support my work by chucking me a few bucks per post, check out my Patreon!

Defending against hypothetical moon life during Apollo 11

[Header image: Photo of the lunar lander taken during Apollo 11.]

In 1969, after the Apollo 11 astronauts were successfully brought back from landing on the moon, they, their spacecraft, and all the samples from the lunar surface were quarantined for 21 days. This was to account for the possibility that they were carrying hostile moon germs. Once the quarantine was up and the astronauts were not sick, and extensive biological testing on them and the samples showed no signs of infection or unexpected life, the astronauts were released.

We know now that the moon is sterile. We didn’t always know this. That was one of the things we hoped to find out from the Apollo 11 program, which was the first time not only that people would visit another celestial body, but that material from another celestial body would be brought back in a relatively pristine fashion to earth. The possibilities were huge.

The possibilities included life, although nobody thought this was especially likely. But in that slim chance of life, there was a chance that life would be harmful to humans or the earth environment. Human history is full of organisms wreaking havoc when introduced to a new location – smallpox in the Americas, rats on Pacific islands, water hyacinth outside of South America. What if there were microbes on the moon? Even if there was a tiny chance, wouldn’t it be worth taking careful measures to avoid the risk of an unknown and irreversible change to the biosphere?

NASA, Congress, and various other federal agencies were apparently convinced to spend millions of dollars building an extensive new facility and taking a raft of other measures to address this possibility.

This is how a completely abstract argument about alien germs was taken seriously and mitigated at great effort and expense during the 1969 Apollo landing.

Continue reading

Will the growing deer prion epidemic spread to humans? Why not?

Helpful background reading: What’s the deal with prions?

A novel lethal infectious neurological disease emerged in American deer a few decades ago. Since then, it’s spread rapidly across the continent. In areas where the disease is found, it can be very common in the deer there.

Map from the Cornell Wildlife Health Lab.

Chronic wasting disease isn’t caused by a bacterium, virus, protist, or worm – it’s caused by a prion, a little misshapen version of a protein that occurs naturally in the nervous systems of deer.

Chemically, the prion is made of exactly the same stuff as its regular counterpart – it’s a string of the same amino acids in the same order, just shaped a little differently. Both the prion and its regular version (PrP) are monomers, single units that naturally stack on top of each other (or onto very similar proteins). The prion’s trick is that as other PrP moves to stack atop it, the prion reshapes them – just a little – so that they also become prions. These chains of prions are quite stable, and, over time, they form long, persistent clusters in the tissue of their victims.

We know of only a few prion diseases in humans. They’re caused by random chance misfolds, a genetic predisposition for PrP to misfold into a prion, accidental cross-contamination via medical supplies, or, rarely, the consumption of prion-infected meat. Every known animal prion is a misfold of the same specific protein, PrP. PrP is expressed in the nervous system, particularly in the brain – so infections cause neurological symptoms and physical changes to the structure of the brain. Prion diseases are slow to develop (up to decades), incurable, and always fatal.

There are two known infectious prion diseases in people. One is kuru, which caused an epidemic among tribes who practiced funerary cannibalism in Papua New Guinea. The other is mad cow disease – bovine spongiform encephalopathy (BSE) in cows, variant Creutzfeldt-Jakob disease (vCJD) in humans – which was first seen in humans in 1996 in the UK, and comes from cows.

Chronic wasting disease (CWD)…

  • Is, like every other animal prion disease, a misfold of PrP. PrP is quite similar in both humans and deer.
  • Is found in multiple deer species which are commonly eaten by humans.
  • Can be carried in deer asymptomatically.

But it doesn’t seem to infect people. Is it ever going to? If a newly emerged virus that could be spread by consuming infected meat were sweeping across the US and killing deer, I would think “oh NO.” I’d need to see very good evidence to stop sounding the alarm.

Now, the fact that it’s been a few decades, and it hasn’t spread to humans yet, is definitely some kind of evidence about safety. But are we humans basically safe from it, or are we living on borrowed time? If you live in an area where CWD has been detected, should you eat the deer?

Sidenote: Usually, you’ll see “BSE” used for the disease in cows and “vCJD” for the disease in humans. But they’re caused by the same agent, and this essay is operating under a zoonotic One Health kind of stance, so I’m just calling the disease BSE here. (As well as the prion that causes it, when I can get away with it.)

In short

The current version of CWD is not infectious to people. We checked. BSE showed that prions can spill over, and there’s no reason a new CWD variant couldn’t someday do the same. The more cases there are, the more likely it is to spill over. That said, BSE did not spill over very effectively. It was always incredibly rare in humans. It’s an awful disease to get, but the chance of getting it is tiny. Prions in general have a harder time spilling over between species than viruses do. CWD might behave somewhat differently but will probably stay hampered by the species barrier.

Why do I think all of this? Keep reading.

North American elk (wapiti), which can carry CWD. This and the image at the top of the article are adapted from a photo from the Idaho Fish and Game department, under a CC BY 2.0 license.

Prions aren’t viruses

I said before that if a fatal neurological virus were infecting deer across the US, and showed up in cooked infected meat, my default assumption would be “we’re in danger.” But a prion isn’t a virus. Why does that matter?

Let’s look at how they replicate. A virus is a little bit of genetic material in a protein coating. You, a human, are a lot of genetic material in a protein coating. When a virus replicates, it slips into your cells, and it hijacks your replication machinery to run its genes instead. Instead of all the useful-to-you tasks your genome has planned, the virus’s genome outlines its own replication, assembles a bunch more viruses, and blows up the factory (cell) to turn them loose into the world.

In other words, the virus is using a robust information-handling system that both you and it have in common – the DNA → RNA → protein pipeline often called “the central dogma” of biology. To a first approximation, you can just add any genetic information at all into the viral genome, and as long as it doesn’t interfere with the virus’s process, whatever you add will get replicated in there too.

Prions do not work like this. They don’t tap into the central dogma. What makes them so fundamentally cool is that they replicate without touching the replication machinery that everything else alive uses – their replication is structural, like a snowflake forming. The host provides raw material in the form of PrP, and the prion – once it lands – encourages that material to take on the right shape for more to form atop it.

What this means is that you can’t encode arbitrary information into a prion. It’s not as though a prion runs on a separate “protein genome” that we could decipher and then write whatever we like into. The entire structure of the prion has to work together to replicate itself. If you made a prion with some different fold in it, that fold has to not just form a stable protein, but pass itself along as well. Prions don’t have a handy DNA replicase enzyme to outsource to – they have to solve the problem of replication themselves, every time.

Prions can evolve, but they do it less – they have fewer free options. They’re more constrained than a virus would be in terms of changes that don’t interrupt the rest of the refolding process and that, on top of that, propagate themselves.

This means that prions are slower to evolve than viruses. …I’m pretty sure, at least. It makes a lot of sense to me. The thing that this definitely means is that:

It’s very hard for prions to cross species barriers

PrP is a very conserved protein across mammals, meaning that all mammals have a version of PrP that’s pretty similar – 90%+ similarity.* But the devil lies in that 10%.

Prions are finely tuned – to convert PrP into a prion, that PrP basically needs to be identical, or at least functionally identical, everywhere the prion works. It doesn’t just need to be susceptible to the prion’s misfolding; it also needs to fold into something that can itself replicate. A few amino acid differences can throw a wrench in the works.

It’s clear that infectious prions can have a hard time crossing species barriers. It depends on the strain. For instance: Mouse prions convert hamster PrP.** Hamster prions don’t convert mouse PrP. Usually a prion strain converts its usual host PrP best, but one cat prion more efficiently converts cow PrP. In a test tube, CWD can convert human or cow PrP a little, but shows slightly more action with sheep PrP (and much more with, of course, deer PrP.)

This sounds terribly arbitrary. But remember, prion behavior comes down to shape. Imagine you’re playing with legos and duplo blocks. You can stack legos on legos and duplos on duplos. You can also put a duplo on top of a lego block. But then you can only add duplo blocks on top of that – you’ve permanently changed what can get added to that stack.
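If it helps to see that shape-matching idea spelled out, here’s a toy model – completely made-up sequences and a made-up one-mismatch tolerance, nothing about real PrP chemistry – just to show how a “convert only what fits the template” rule produces a species barrier:

```python
# Toy model of the prion species barrier (Lego/Duplo logic, not real biochemistry).
# The "sequences" and the mismatch tolerance below are invented for illustration.

def mismatches(a: str, b: str) -> int:
    """Count positions where two equal-length toy 'PrP sequences' differ."""
    return sum(x != y for x, y in zip(a, b))

def can_convert(prion_template: str, host_prp: str, tolerance: int = 1) -> bool:
    """The prion only converts host PrP whose shape is close enough to its template."""
    return mismatches(prion_template, host_prp) <= tolerance

deer_prp  = "MKHVAGAAAAGAVVGGLGGYML"  # fake sequence
sheep_prp = "MKHVAGAAAAGAVVGGLGGYVL"  # 1 difference from the deer sequence
human_prp = "MKHVASAAAAGAVVGGLGRYVL"  # 3 differences from the deer sequence

cwd_prion = deer_prp  # the prion is templated on its usual host

for species, prp in [("deer", deer_prp), ("sheep", sheep_prp), ("human", human_prp)]:
    verdict = "converts" if can_convert(cwd_prion, prp) else "blocked"
    print(f"CWD template vs {species} PrP: {mismatches(cwd_prion, prp)} mismatches -> {verdict}")
```

The real species barrier isn’t a clean threshold like this, of course – which residues differ matters at least as much as how many – but this is the Lego/Duplo point in code: the same template grabs a close match and slides right off a slightly-more-different one.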

When we look at people – or deer, or sheep, etc. – who are genetically resistant to prions (more on that later), we find that serious resistance can be conferred by single nucleotide changes in the PrP gene. Tweak one single letter of DNA in the right place, and their PrP just doesn’t bend into the prion shape easily. If the infection takes, it proceeds slowly – slow enough that a person might die of old age before the prion would kill them.

So if a decent number of members of a species can be resistant to prion diseases, based on as little as one amino acid – then a new species, one that might have dozens of different amino acids in the PrP gene, is unlikely to be fertile ground for an old prion.

* (This is kind of weird given that we don’t know what PrP actually does – the name just stands for “prion protein,” because it’s the protein that’s associated with prions, and we don’t know its function. We can genetically alter mice so that they don’t produce PrP at all, and they show slight cognitive issues but are basically fine. Classic evolution. It’s appendices all over again.)
** Sidebar: When we look at studies for this, we see that like a lot of pathology research, there's a spectrum of experiments on different points on the axis from “deeply unrealistic” to “a pretty reasonable simulacrum of natural infection”, like:

1. Shaking up loose prions and PrP in a petri dish and seeing if the PrP converts

2. Intracranial injection with brain matter (i.e. grinding up a diseased brain and injecting some of that nasty juice into the brain of a healthy animal and seeing if it gets sick)

3. Feeding (or some other natural route of exposure) a plausible natural dose of prions to a healthy animal and seeing if that animal gets sick

The experiments mentioned above are based on 1. Only experiments that do 3 actually prove the disease is naturally infectious. For instance, Alzheimer’s disease is “infectious” if you do 2, but since nobody does that, it’s not actually a contagious threat. That said, doing more-abstracted experiments means you can really zoom in on what makes strain specificity tick.

But prions do cross species barriers

Probably the best counterargument to everything above is that another prion disease, BSE, did cross the species barrier. This prion pulled off a balancing act: it successfully infected cows and humans at the same time.

Let’s be clear about one big and interesting thing: BSE is not good at crossing the species barrier. When I say this, I mean two things:

First, people did not get it often. While the big UK outbreak was famously terrifying, only around 200 people ever got sick from mad cow disease. Around 200,000 cows tested positive for it. But most cows weren’t tested. Researchers estimate that 2 million cows total in the UK had BSE, most of which were slaughtered and entered the food chain. These days, Britain has 2 million cows at any given time.

At first glance, and to a first approximation, I think everyone living in the UK for a while between 1985 and 1996 or so (who ate beef sometimes) must have eaten beef from an infected animal. That’s approximately who the recently-overturned blood donation ban in the US affected. I had thought that was sort of an average over who was at risk of exposure – but no, that basically encompassed everyone who was exposed. Exposure rarely leads to infection.

You’re more likely to get struck by lightning than to get BSE even if you have eaten BSE-infected beef.
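For what it’s worth, here’s the back-of-envelope arithmetic behind that lightning comparison, as a rough sketch. The ~200 cases are from above; the “basically everyone in the UK ate some infected beef” population figure and the 1-in-15,000 lightning odds (which I come back to at the end of this post) are ballpark assumptions, not careful epidemiology:

```python
# Back-of-envelope risk comparison (ballpark numbers, not careful epidemiology).
vcjd_cases = 200              # roughly how many people ever got sick from BSE
exposed_people = 50_000_000   # assume most of the UK ate beef from an infected animal
lightning_odds = 1 / 15_000   # rough lifetime odds of being struck by lightning

bse_odds = vcjd_cases / exposed_people
print(f"Odds of getting sick given exposure: about 1 in {round(1 / bse_odds):,}")
print(f"Lifetime lightning odds:             about 1 in {round(1 / lightning_odds):,}")
print(f"Lightning comes out ~{lightning_odds / bse_odds:.0f}x more likely")
```

However you shade the assumptions, the exposure-to-infection ratio stays enormous, which is the whole point.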

Second, in the rare cases the disease takes, it’s slower. Farm cows live short lives, and the cows that died from BSE did so around 4-5 years post-exposure, already old by beef-industry standards. They survived at most weeks or months after symptoms began. Humans infected with BSE, meanwhile, can harbor it for up to decades post-exposure, and live an average of over a year after showing symptoms.

I think both of these are directly attributable to the prion just being less efficient at converting human PrP – versus the PrP of the cows it was adapted to. It doesn’t often catch on in the brain. When it does, it moves extremely slowly.

But it did cross over. And as far as I can tell, there’s no reason CWD can’t do the same. Like viruses, CWD has been observed to evolve as it bounces between hosts with different genotypes. Some variants of CWD seem more capable of converting mouse PrP than the common ones. The good old friend of those who play god, serial passaging, encourages it.

(Note also that all of the above differs from kuru, which did cause a proper epidemic. Kuru spread between humans and was adapted for spreading in humans. When looking to CWD, BSE is a better reference point because it spread between cows and only incidentally jumped to humans – it was never adapted for human spread.)

How is CWD different from BSE?

BSE appears in very, very low numbers anywhere outside the brains and spines of its victims. CWD is concentrated in the brain too, but also appears in the spine and lymphatic tissue, and, to a lesser but still-present degree, everywhere else: muscle, antler velvet, feces, blood, saliva. It’s more systemic than BSE.

Cows are concentrated in farms, and so are some deer, but wild deer carry CWD all hither and yon. As they do, they leave it behind in:

  • Feces – Infected deer shed prions in their feces. An animal that eats an infected deer might also shed prions in its feces.
  • Bodies – Deer aren’t strictly herbivorous if push comes to shove. If a deer dies, another deer might eat the body. One study found that after a population of reindeer started regularly gnawing on each other’s antlers (#JustDeerThings), CWD swept in.
  • Dirt – Prions are resilient and can linger, viable, in soil. Deer eat dirt accidentally while eating grass, as well as on purpose from time to time, and can be infected that way.
  • Grass – Prions in the soil or otherwise deposited onto plant tissue can hang out in living grass for a long time.
  • Ticks – One study found that ticks fed CWD prions don’t degrade the protein. If they’re then eaten by deer (for instance, during grooming), they could spread CWD. This study isn’t perfect evidence; the authors note that they fed the ticks a concentration of prions about 1000x higher than is found in infected deer blood. But if my understanding of statistics and infection dynamics is correct, that suggests that maybe 1 in 1000 ticks feeding on infected deer blood reaches that level of infectivity? Deer have a lot of ticks! Still pretty bad!

That’s a lot of widespread potentially-infectious material.

When CWD is in an area, it can be very common – up to 30% of wild deer, and up to 90% of deer on an infected farm. These deer can carry CWD and have it in their tissues for quite some time asymptomatically – so while it frequently has very visible behavioral and physical symptoms, it also sometimes doesn’t.

In short, there’s a lot of CWD in a lot of places throughout the environment. It’s also spreading very rapidly. If a variant capable of infecting both deer and humans emerged, there would be a lot of chances for exposure.

Deer on a New Zealand deer farm. By LBM1948, under a CC BY-SA 4.0 license.

What to do?

As an individual

As with any circumstance at all, COVID or salmonella or just living in a world that is sometimes out to get you, you have to choose what level of risk you’re alright with. At first, writing this piece, I was going to make a suggestion like “definitely avoid eating deer from areas that have CWD just in case your deer is the one that has a human-transmissible prion disease.” I made a little chart about my sense of the relative risk levels, to help put the risk in scale even though it wasn’t quantified. It went like this:

Imagine a spectrum of risk of getting a prion disease. On one end, which we could call "don't do this", is "eating beef from an animal with BSE". Close to that but slightly less risky is "eating deer from an animal with CWD". On the other, very safe end is "eating beef from somewhere without known active BSE cases". This entire model is wrong, though.

But, as usual, quantification turns out to be pretty important. I actually did the numbers about how many people ever got sick from BSE (~200) and how many BSE-infected cows were in the food chain (~2,000,000), which made the actual risk clear. So I guess the more prosaic version looks like this:

Remember that spectrum of risk? Well, all of these risks are infinitesimal. Worry about something else! Eating beef from an animal with BSE is still more dangerous than eating deer from an animal with CWD, which is more dangerous than eating beef from somewhere without known active BSE cases - but all of these are clustered very, very far on the safe side of the graph.

…This is sort of a joke, to be clear. There’s not a health agency anywhere on earth that will advise you to eat meat from cows known to have BSE, and the CDC recommends not eating meat from deer that test positive for CWD (though it’s never infected a human before.)

On top of that, the overall threat is still uncertain because what you’re betting on is “the chance that this animal will have had an as-of-yet undetected CWD variant that can infect humans.” There’s inherently no baseline for that!

We don’t know what CWD would act like if it spilled over. It might be more infectious and dangerous than other infectious prion diseases we’ve seen – remember, with humans, the sample size is 2! So if CWD is in your area and it’s not a hardship to avoid eating deer, you might want to steer clear. …But the odds are in your favor.

As a society

There’s not an obvious solution. The epidemic spreading among deer isn’t caused by a political problem; it comes from nature.

The US is doing a lot right: mainly, it is monitoring and tracking the spread of the disease. It’s spreading the word. (If nothing else, you can keep track of this by subscribing to google alerts for “chronic wasting disease”, and then pretty often you’ll get an email saying things like “CWD found in Florida for the first time” or “CWD found an hour from you for the first time.”) It is encouraging people to submit deer heads for testing, and not to eat meat from deer that test positive. The CDC, APHIS, Fish & Wildlife Service, and more are all aware of the problem and participating in tracking it.

What more could be done? Well, a lot of the things that would help a potential spillover of CWD look like actions that can be taken in advance of any threatening novel disease. There is research being done on prions and how they cause disease, better diagnostics, and possible therapeutics. All of these are important. Prion disease diagnosis and treatment is inherently difficult, and on top of that, has little overlap with most kinds of diagnosis or treatment. It’s also such a rare set of diseases that it’s not terribly well studied. (My understanding is that right now there are various kinds of tests for specific prion diseases – which could be adapted for a new prion disease – that are extremely sensitive although not particularly cheap or widespread.)

I don’t know a lot about the regulatory or surveillance situation vis-a-vis deer farms, or for that matter, much about deer farms at all. I do know that they seem to be associated with outbreaks, and heavy disease prevalence once there is an outbreak. That’s a smart area to keep an eye on.

If CWD did spill over, what would happen?

It will probably also take time to locate cases and identify the culprit, but given the aforementioned awareness and surveillance of the issue, it ought to take way less time than it took to identify the causative agent of BSE. Officials are already paying attention to deaths that could potentially be CWD-related, like neurodegenerative illnesses that kill young people.

First, everyone gets very nervous about eating venison for a while.

After that, I expect the effects will look a lot like the aftermath of mad cow disease. Mad cow disease, and very likely a hypothetical CWD spillover, would not be transmissible between people in usual ways – coughing, skin contact, fomites, whatever.

It is transmissible via unnatural routes, which is to say, blood transfusions. You might remember how people who’d spent over 6 months in Britain couldn’t donate blood in the US until 2022, a direct response to the BSE outbreak. Yes, the disease was extremely rare, but unless you can quickly and cheaply test incoming blood donations, a donor could donate blood to multiple people. Suppose some of them donate blood down the line. You’d have a chain of infection and a disease with a potentially decades-long incubation period. And remember, the disease is incurable and fatal. So basically, the blood donation system (and probably other organ donation) becomes very problematic.

That said, I don’t think it would break down completely. In the BSE case, lots of people in the UK eat beef from time to time – probably most people. But with a deerborne disease, I would guess that a lot of the US population could confidently declare that they haven’t eaten deer within the past, say, year or so (prior to a detected outbreak.) So I think there’d be panic and perhaps strain on the system but not necessarily a complete breakdown. Again, all of this is predicated on a new prion disease working like known human prion diseases.

Genetic resistance

One final fun fact: People with a certain genotype in the PrP gene – specifically, PRNP codon 129 M/V or V/V – are incredibly genetically resistant to known infectious prion diseases. If they do get infected, they survive for much longer.

It’s also not clear that this would hold true for a hypothetical CWD crossover to humans. But it is true for both kuru and BSE. It’s also partly (although not totally) protective against sporadic Creutzfeldt-Jakob disease.

If you’ve used a service like 23andMe, maybe check out your data and see if you’re resistant to infectious prion diseases. Here’s what you’re looking for:

129M/V or V/V (amino acids), or G/G or A/G (nucleotides) – rs1799990

If you instead have M/M (amino acids) or A/A (nucleotides) at that site, you’re SOL… at a higher but still very low overall risk.
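If you’d rather check programmatically than scroll through the raw file, here’s a minimal sketch. The filename is a placeholder, and it assumes the common 23andMe-style export format (tab-separated rsid, chromosome, position, genotype, with ‘#’ comment lines) – double-check against your own file. The A = M, G = V reading follows the mapping above:

```python
# Minimal sketch: look up the PRNP codon 129 SNP (rs1799990) in a 23andMe-style
# raw data export. "genome_data.txt" is a placeholder filename; the assumed format
# is tab-separated (rsid, chromosome, position, genotype) with '#' comment lines.

def lookup_snp(path, rsid):
    with open(path) as f:
        for line in f:
            if line.startswith("#"):
                continue
            fields = line.rstrip("\n").split("\t")
            if len(fields) >= 4 and fields[0] == rsid:
                return fields[3]  # e.g. "AG", "GG", "AA", or "--" for no call
    return None

genotype = lookup_snp("genome_data.txt", "rs1799990")

# Per the mapping above: A = methionine (M), G = valine (V) at codon 129.
if genotype in ("AG", "GA", "GG"):
    print(f"{genotype}: M/V or V/V - relatively resistant to known infectious prions")
elif genotype == "AA":
    print("AA: M/M - the more susceptible (but still very-low-risk) genotype")
else:
    print(f"No usable call for rs1799990 (got {genotype})")
```

Or just search the raw file for “rs1799990” in a text editor – it amounts to the same thing.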


Final thoughts

  • I think exercises like “if XYZ disease emerges, what will the ramifications and response be” are valuable. They lead to questions like “what problems will seem obvious in retrospect” and “how can we build systems now that will improve outcomes of disasters.” This is an interesting case study and I might revisit it later.

  • Has anyone reading this ever been struck by lightning? That’s the go-to comparison for things being rare. But 1 in 15,000 isn’t, like, unthinkably rare. I’m just curious.

  • No, seriously, what’s the deal with deer farms? I never think about deer farms much. When I think of venison, I imagine someone wearing camo and carrying a rifle out into a national forest or a buddy’s backyard or something. How many deer are harvested from hunting vs. farms? What about in the US vs. worldwide? Does anyone know? Tell me in the comments.

This essay was crossposted to LessWrong. Also linked at the EA Forums.

If you want to encourage my work, check out my Patreon. Today’s my birthday! I sure would appreciate your support.

Also, this eukaryote is job-hunting. If you have or know of a full-time position for a researcher, analyst, and communicator with a Master’s in Biodefense, let me know:

Eukaryote Writes Blog (at) gmail (dot) com

In the mean time, perhaps you have other desires. You’d like a one-off research project, or there’s a burning question you’d love a well-cited answer to. Maybe you want someone to fact-check or punch up your work. Either way, you’d like to buy a few hours of my time. Well, I have hours, and the getting is good. Hit me up! Let’s chat. 🐟

Woodblock print of swimming prawns

Eukaryote in Asterisk Magazine + New Patreon Per-post setup

Eukaryote elsewhere

I have an article in the latest issue of Asterisk Magazine. After you get really deep into the weeds of invertebrate sentience and fish welfare and the scale of factory farming, what do you do with that information vis-a-vis what you feel comfortable eating? Here’s what I’ve landed on and why. Read the piece that Scott Alexander characterized as making me sound more annoying to eat with than I really am.

(Also check out the full piece of delightful accompanying art from Karol Banach.)

Check out the rest of the issue as well.

A new better Patreon has landed

This blog has a Patreon! Again! I’m switching from the old per month payment model to a new pay per post system, since this blog has not been emitting regular monthly updates in quite some time. So if you get excited when you see Eukaryote Writes Blog in your feed, and you want to incentivize more of that kind of thing, try this new and improved system for giving me money.

Here’s the link. Consider a small donation per post. Direct incentives: Lots of people are fans. I’m no effective charity but the consistent revenue does have a concrete and pleasant impact on my life right now, so I do really appreciate it.

It’s important to me that the things I write here are freely available. This will continue to be true! I might think of some short bits of content that will be patron-exclusive down the line, but anything major? Your local eukaryote is here to write a blog, not a subscription service. It’s in the name.

Helpful notes

  • To be clear, the payment will trigger per substantial new post. Updates of content elsewhere, metablogging like this, short corrections, etc, won’t count.
  • You can set a monthly limit in Patreon, even with the per-post model. For the record, I think it’s unlikely I’d put out more than 1-2 posts per month even in the long term future.
  • And of course, you can change your payment or unsubscribe at any old time you please.
Woodblock print of swimming prawns
Excerpt of Horse Mackerel (Aji) with Shrimp or Prawn, by Utagawa Hiroshige, ~1822-23. Public Domain.
An old knit tube with colorful stripes

Who invented knitting? The plot thickens

Last time on Eukaryote Writes Blog: You learned about knitting history.

You thought you were done learning about knitting history? You fool. You buffoon. I wanted to double check some things in the last post and found out that the origins of knitting are even weirder than I guessed.

Humans have been wearing clothes to hide our sinful sinful bodies from each other for maybe about 20,000 years. To make clothes, you need cloth. One way to make cloth is animal skin or membrane, that is, leather. If you want to use it in any complicated or efficient way, you also need some way to sew it – very thin strips of leather, or sinew or plant fiber spun into thread. Also popular since very early on is taking that thread and turning it into cloth. There are a few ways to do this.

A drawing showing loose fiber, which turns into twisted thread, which is arranged in various ways to make different kinds of fabric structures. Depicted are the structures for: naalbound, woven, knit, looped, and twined fabric.
By the way, I’m going to be referring to “thread” and “yarn” interchangeably from here on out. Don’t worry about it.

(Can you just sort of smush the fiber into cloth without making it into thread? Yes. This is called felting. How well it works depends on the material properties of the fiber. A lot of traditional Pacific Island cloth was felted from tree bark.)

Now with all of these, you could probably make some kind of cloth by taking threads and, by hand, shaping them into these different structures. But that sounds exhausting and nobody did that. Let’s get tools involved. Each of these structures corresponds to a different kind of manufacturing technique.

By far, the most popular way of making cloth is weaving. Everyone has been weaving for tens of thousands of years. It’s not quite a cultural universal but it’s damn close. To weave, you need a loom.1 There are ten million kinds of loom. Most primitive looms can make a piece of cloth that is, at most, the size of the loom. So if you want to make a tunic that’s three feet wide and four feet long, you need cloth that’s at least three feet wide and four feet long, and thus, a loom that’s at least three feet wide and four feet long. You can see how weaving was often a stationary affair.

Recap

Here’s what I said in the last post: Knitting is interesting because the manufacturing process is pretty simple, needs simple tools, and is portable. The final result is also warm and stretchy, and can be made in various shapes (not just flat sheets). And yet, it was invented fairly recently in human history.

I mostly stand by what I said in the last post. But since then I’ve found some incredible resources, particularly the scholarly blogs Loopholes by Cary “stringbed” Karp and Nalbound by Anne Marie Deckerson, which have sent me down new rabbit-holes. The Egyptian knit socks I outlined in the last post sure do seem to be the first known knit garments, like, a piece of clothing that is meant to cover your body. They’re certainly the first known ones that take advantage of knitting’s unique properties: of being stretchy, of being manufacturable in arbitrary shapes. The earliest knitting is… weirder.

SCA websites

Quick sidenote – I got into knitting because, in grad school, I decided that in the interests of well-roundedness and my ocular health, I needed hobbies that didn’t involve reading research papers. (You can see how far I got with that). So I did two things: I started playing the autoharp, and I learned how to knit. Then, I was interested in the overlap between nerds and handicrafts, so a friend in the Society for Creative Anachronism pitched me on it and took me to a coronation. I was hooked. The SCA covers “the medieval period”; usually, 1000 CE through 1600 CE.

I first got into the history of knitting because I was checking if knitting counted as a medieval period art form. I was surprised to find that the answer was “yes, but barely.” As I kept looking, a lot of the really good literature and analysis – especially experimental archaeology – came out of blogs of people who were into it as a hobby, or perhaps as a lifestyle that had turned into a job like historical reenactment. This included a lot of people in the SCA, who had gone into these depths before and just wrote down what they found and published it for someone else to find. It’s a really lovely knowledge tradition to find one’s self a part of.

Aren’t you forgetting sprang?

There’s an ancient technique that gets some of the benefits of knitting, which I didn’t get to in the last post. It’s called sprang. Mechanically, it’s kind of like braiding. Like weaving, sprang requires a loom (the size of the cloth it produces) and makes a flat sheet. Like knitting, however, it’s stretchy.

Sprang shows up in lots of places – the oldest in 1400 BCE in Denmark, but also other places in Europe, plus (before colonization!): Egypt, the Middle East, central Asia, India, Peru, Wisconsin, and the North American Southwest. Here’s a video where re-enactor Sally Pointer makes a sprang hairnet with iron-age materials.

Despite being widespread, it was never a common way to make cloth – everyone was already weaving. The question of the hour is: Was it used to make socks?

Well, there were probably sprang leggings. Dagmar Drinkler has made historically inspired sprang leggings, which demonstrate that sprang colorwork can produce some of the intricate designs we see painted on Greek statues – like those on this 480 BCE Persian archer.

I haven’t found any attestations of historical sprang socks. The Sprang Lady has made some, but they’re either tube socks or have separately knitted soles.

Why weren’t there sprang socks? Why didn’t sprang, widespread as it is, take on the niche that knitting took?

I think there are two reasons. One, remember that a sock is a shaped garment – tube-like, usually with a bend at the heel – and that, like weaving, sprang makes a flat sheet. If you want another shape, you have to sew it in, and the fabric loses some stretch at the seam. It’s just more steps and skills than knitting a sock.

The second reason is warmth. I’ve never done sprang myself, but from what I can tell, it has more of a net-like openness as manufactured, unlike knitting, which comes with some depth to it. Even weaving can easily be made pretty dense simply by putting the threads close together. I think, overall, a sprang garment made with primitive materials is going to be less warm than a knit garment made with primitive materials.

Those are my guesses. I bring it up merely to note that there was another thread → cloth technique that made stretchy things that didn’t catch on the same way knitting did. If you’re interested in sprang, I cannot recommend The Sprang Lady’s work highly enough.

Anyway, let’s get back to knitting.

Knitting looms

The whole thing about Roman dodecahedrons being (hypothetically) used to knit glove fingers, described in the last post? I don’t think that was actually the intended purpose, for the reasons I described re: knitting wasn’t invented yet. But I will cop to the best argument in its favor, which is that you can, in fact, knit glove fingers with a Roman dodecahedron.

“But how?” say those of you not deeply familiar with various fiber arts. “That’s not needles,” you say.

You got me there. This is a variant of a knitting loom. A knitting loom is a hoop with pegs, used to make knit tubes. It can be the basis of a knitting machine, but you can also knit on one on its own. Knitting looms make more consistent knit tubes with less hand-eye coordination required. (You can also make flat panels with them, especially on a version called a knitting rake, but since all of the early knitting we’re talking about is tubes anyhow, let’s ignore that for the time being.)

Knitting on a modern knitting loom. || Photo from Cynthia M. Parker on flickr, under a CC BY-SA 2.0 license.

Knitting on a loom is also called spool knitting (because you can use a spool with nails in it as the loom for knitting a cord) and tomboy knitting (…okay). Structurally, I think this is also basically the same thing as lucet cord-making, so let’s go ahead and throw that in with this family of techniques. (The earliest lucets are from ~1000 CE Viking Sweden and perhaps medieval Viking Britain.)

The important thing to note is that loom knitting makes a result that is, structurally, knit. It’s difficult to tell whether a given piece is knit with a loom or needles, if you didn’t see it being made. But since it’s a different technique, different aspects become easier or harder.

A knitting loom sounds complicated but isn’t hard to make, is the thing. Once you have nails, you can make one easily by putting them in a wood ring. You could probably carve one from wood with primitive tools. Or forge one. So we have the question: Did knitting needles or knitting looms come first?

We actually have no idea. There aren’t objects that are really clearly knitting needles OR knitting looms until long after the earliest pieces of knitting. This strikes me as a little odd, since wood and especially metal should preserve better than fabric, but it’s what we’ve got. It’s probably not helped by the fact that knitting needles are basically just smooth straight sticks, and it’s hard to say that any smooth straight stick is conclusively a knitting needle (unless you find it with half a sock still on it.)

(At least one author, Isela Phelps, speculates that finger-knitting, which uses the fingers of one hand like a knitting loom and makes a chunky knit ribbon, came first – presumably because, well, it’s easier to start from no tools than to start from a specialized tool. This is possible, although the earliest knit objects are too fine and have too many stitches to have been finger-knit. The creators must have used tools.)

(stringbed also points out that a piece of whale baleen can be used as circular knitting needles, and that the relevant cultures did have access to and trade in whale parts. While we have no particular evidence that baleen was used this way, it does mean that humanity wouldn’t have had to invent plastic before inventing the circular knitting needle – we could have had that since the prehistoric period. So, I don’t know, maybe it was whales.)

THE first knitting

The earliest knit objects we have… ugh. It’s not the Egyptian socks. It’s this.

Photo of an old, long, thin knit tube in lots of striped colors.
One of the oldest knit objects. || Photo from Musée du Louvre, AF 6027.

We have a pair of long, thin, colorful knit tubes, about an inch wide and a few feet long, pretty similar to each other. Due to the problems inherent in time passing and the flow of knowledge, we know that one of them is probably from Egypt and was carbon-dated to 425-594 CE. The other, quite similar tube, of a similar age, has not been carbon-dated but is definitely from Egypt. (The original source text for this second artifact is in German, so I didn’t bother trying to find it, and instead refer to stringbed’s analysis. See also matthewpius guest-blogging on Loopholes.) So between the two of them, we have a strong guess that these knit tubes were manufactured in Egypt around 425-594 CE, about 500 years before socks.

People think it was used as a belt.

This is wild to me. Knitting is stretchy, and I did make fun of those peasants in 1300 CE for not having elastic waistlines, so I could see a knitted belt being more comfortable than other kinds of belts.2 But not a lot more comfortable. A narrow knit belt isn’t going to distribute force onto the body much differently than a regular non-stretchy belt, and regular non-stretchy belts were already in great supply – woven, rope, leather, etc. Someone invented a whole new means of cloth manufacture and used it to make a thing that already existed, just slightly differently.

Then, as far as I can tell, there are no knit objects in the known historical record for five hundred years until the Egyptian socks pop up.

Pulling objects out of the past is hard. Especially things made from cloth or animal fibers, which rot (as compared to metal, pottery, rocks, bones, which last so long that in the absence of other evidence, we name ancient cultures based on them.) But every now and then, we can. We’ve found older bodies and textiles preserved in ice and bogs and swamps.3 We have evidence of weaving looms and sewing needles and pictures of people spinning or weaving cloth and descriptions of them doing it, from before and after. I’m guessing that the technology just took a very long time to diversify beyond belts.

Speaking of which: how was the belt made? As mentioned, we don’t find anything until much later that is conclusively a knitting needle or a knitting loom. The belts are also, according to matthewpius on Loopholes, made with a structure called double knitting. That effect is (as indicated by Pallia – another historic reenactor blog!) kind of hard to achieve with knitting needles in the way they did it, but pretty simple with a knitting loom.

(Another Egyptian knit tube belt from an unclear number of centuries later.)

Viking knitting

You think this is bad? Remember how I said knitting was a way of manufacturing cloth, but that it was also definable as a specific structure of thread, one that could be made with different methods?

The oldest knit object in Europe might be a cup.

Photo of a richly decorated old silver cup.
The Ardagh Chalice. || Photo by Sailko under a CC BY-SA 3.0 license.

You gotta flip it over.

Another photo of the ornate chalice from the equally ornate bottom. Red arrows point to some intricate wire decorations around the rim.
Underside of the Ardagh Chalice. || Adapted from a Metropolitan Museum image.

Enhance.

Black and white zoom in on the wire decorations. It's more clearly a knit structure.
Photo from Robert M. Organ’s 1963 article “Examination of the Ardagh Chalice – A Case History”, where they let some people take the cup apart and put it back together after.

That’s right, this decoration on the bottom of the Ardagh Chalice is knit from wire.
Another example is the decoration on the side of the Derrynaflan Paten, a plate made in the 700s or 800s CE in Ireland. All the examples seem to be from churches, hidden by or from Vikings. Over the next few hundred years, there are some other objects in this technique. They’re tubes knitted from silver wire. “Wait, can you knit with wire?” Yes. Stringbed points out that knitting wire with needles or a knitting loom would be tough on the valuable silver wire – they could break or distort it.

Photo of an ornate silver plate with gold decorations. There are silver knit wire tubes around the edge.
The Derrynaflan Paten, zoomed in on the knit decorations around the edge. || Adapted from this photo by Johnbod, under a CC BY-SA 3.0 license.

What would make sense to do it with is a little hook, like a crochet hook. But that would only work on wire – yarn doesn’t have the structural integrity to be knit with just a hook; you need to support each of the active loops.

So was the knit structure just invented separately by Viking silversmiths, before it spread to anyone else? I think it might have been. It’s just such a long time before we see knit cloth, and we have this other plausible story for how the cloth got there.

(I wondered if there was a connection between the Viking knitting and their sources of silver. Vikings did get their silver from the Islamic world, but as far as I can tell, mostly from Iran, which is pretty far from Egypt and doesn’t have an ancient knitting history – so I can’t find any connection there.)

The Egyptian socks

Let’s go back to those first knit garments (that aren’t belts), the Egyptian knit blue-and-white socks. There are maybe a few dozen of these, now found in museums around the world. They seem to have been pulled out of Egypt (people think Fustat) by various European/American collectors. People think that they were made around 1000-1300 CE. The socks are quite similar: knit, made of cotton, in white and 1-3 shades of indigo, with geometric designs sometimes including Kufic characters.

I can’t find a specific origin location (other than “probably Egypt, maybe Fustat?”) for any of them. The possible first sock mentioned in the last post is one of these – I don’t know if there are any particular reasons for thinking that sock is older than the others.

This one doesn’t seem to be knit OR naalbound. Anne Marie Decker at Nalbound.com thinks it’s crocheted and that the date is just completely wrong. To me, at least, this casts doubt on all the other dates of similar-looking socks.

That anomalous sock scared me. What if none of them had been carbon-dated? Oh my god, they’re probably all scams and knitting was invented in 1400 and I’m wrong about everything. But I was told in a historical knitting facebook group that at least one had been dated. I found the article, and a friend from a minecraft discord helped me out with an interlibrary loan. I was able to locate the publication where Antoine de Moor, Chris Verhecken-Lammens and Mark Van Strydonck did in fact carbon-date four ancient blue-and-white knit cotton socks and found that they dated back to approximately 1100 CE – with a 95% chance that they were made somewhere between 1062 and 1149 CE. Success!

Helpful research tip: for the few times when the SCA websites fail you, try your facebook groups and your minecraft discords.

Estonian mitten

Photo of a tattered old fragment of knitting. There are some colored designs on it in blue and red.
Yeah, this is all of it. Archeology is HARD. [Image from Anneke Lyffland’s writeup.]

Also, here’s a knit fragment of a mitten found in Estonia. (I don’t have the expertise or the mitten to determine it myself, but Anneke Lyffland (another SCA name), a scholar who studied it, is aware of cross-knit-looped naalbinding – like the Peruvian knit lookalikes mentioned in the last post – and doesn’t believe this was naalbound.) It was part of a burial dated to 1238-1299 CE. This is fascinating, and does suggest a culture of knitted practical objects in Eastern Europe in this time period. This is the earliest East European non-sock knit fabric garment that I’m aware of.

But as far as I know, this is just the one mitten. I don’t know much about archaeology in the area and era, and can’t speculate as to whether this is evidence that knitting was rare or whether we have very few wool textiles from the area and it’s not that surprising. (The voice of shoulder-Thomas-Bayes says: Lots of things are evidence! Okay, I can’t speculate as to whether it’s strong evidence, are you happy, Reverend Bayes?) Then again, a bunch of speculation in this post is also based on two maybe-belts, so, oh well. Take this with salt.

By the way, remember when I said crochet was super-duper modern, like invented in the 1700s?

Literally a few days ago, none other than the dream team of Cary “stringbed” Karp and Anne Marie Decker published an article in Archaeological Textiles Review identifying several ancient probably-Egyptian socks, previously thought to be naalbound, as actually crocheted.

This comes down to the thing about fabric structures versus techniques. There’s a structure called slip stitch that can be either crocheted or naalbound. Since we know naalbinding is that old, if you’re looking at an old garment and see slip stitch, maybe you say it was naalbound. But basically no fabric garment is just continuous structure all the way through. How do the edges work? How did it start and stop? Are there any pieces worked differently, like the turning of a heel or a cuff or a border? Those parts might be more clearly worked with a crochet hook than a naalbinding needle. And indeed, that’s what Karp and Decker found. This might mean that those pieces are forgeries – there’s been no carbon dating. But it might mean crochet is much, much older than previously thought.

My hypothesis

Knitting was invented sometime around or perhaps before 600 CE in Egypt.

From Egypt, it spread to other Muslim regions.

It spread into Europe via one or more of these:

  1. Ordinary cultural diffusion northwards
  2. Islamic influence in the Iberian Peninsula
    • In 711 CE, the Umayyad Caliphate conquered most of the Iberian Peninsula, establishing Al-Andalus…
      • Kicking off a lot of Islamic presence in and control over the area up until the late 1400s…
  3. Meanwhile, starting in 1095 CE, the Latin Church called for armies to take Jerusalem back from Muslim rule (partly at the Byzantines’ request), kicking off the Crusades.
    • …Peppering Arabic influences into Europe, particularly France, over the next couple centuries.

… Also, the Vikings were there. They separately invented the knitting structure in wire, but never got around to trying it out in cloth, perhaps because the required technique was different.

Another possibility

Wrynne, AKA Baroness Rhiall of Wystandesdon (what did I say about SCA websites?), a woman who knows a thing or two about socks, believes – based on these finds plus the design of other historical knit socks – that the route goes something like:

??? points to Iran, which points to: (A) Eastern Europe, then to (1) Norway and Sweden and (2) Russia; and (B) ???, then Spain, then Western Europe.

I don’t know enough about socks to have a sophisticated opinion on her evidence, but the reasoning seems solid to me. For instance, as she explains, old Western European socks are knit from the cuff of the sock down, whereas old Middle Eastern and East European socks are knit from the toe of the sock up – which is also how Eastern and Northern European naalbound socks were shaped. Baroness Rhiall thinks Western Europe invented its sockmaking techniques independently, based on only a little exposure to a few late-1200s/1300s knit pieces from Moorish artisans.

What about tools?

Here’s my best guess: The Egyptian tubes were made on knitting looms.

The Viking tubes were invented separately, made with a metal hook as stringbed speculates, and never had any particular connection to knitting in yarn.

At some point, in the Middle East, someone figured out knitting needles. The Egyptian socks, the Estonian mitten, and most other things were knit in the round on double-pointed needles.

I don’t like this as an explanation, mostly because of how it posits 3 separate tools involved in the earliest knit structures – that seems overly complicated. But it’s what I’ve got.

Knitting in the tracks of naalbinding

I don’t know if this is anything, but here are some places we also find lots of naalbinding, beginning from well before the medieval period: Egypt. Oman. The UAE. Syria. Israel. Denmark. Norway. Sweden. Sort of the same path that we predict knitting traveled in.

I don’t know what I’m looking at here.

  • Maybe this isn’t real, and these places just happen to preserve textiles better
  • Longstanding trade or migration routes between North Africa, the Middle East, and Eastern Europe?
  • Culture of innovation in fiber?
  • Maybe fiber is more abundant in these areas, and thus there was more affordance for experimenting. (See below.)

It might be a coincidence. But it’s an odd coincidence, if so.

Why did it take so long for someone to invent knitting?

This is the question I set out to answer in the initial post, but then it turned into a whole thing and I don’t think I ever actually answered my question. Very, very speculatively: I think knitting is just so complicated that it took thousands of years, and an environment rich in fiber innovation, for someone to invent and make use of the series of steps that is knitting.

Take this next argument with a saltshaker, but: my intuitions back this up. I have a good visual imagination. I can sort of “get” how a slip knot works. I get sewing. I understand weaving; I can boil it down in my mind to its constituents.

There are birds that do a form of sewing and a form of weaving. I don’t want to imply that if an animal can figure it out, it’s clearly obvious – I imagine I’d have a lot of trouble walking if I were thrown into the body of a centipede, and chimpanzees can drastically outperform humans on certain cognitive tasks – but I think, again, it’s evidence that it’s a simpler task in some sense.

Same with sprang. It’s not a process I’m familiar with, but watching Sally Pointer do it on a very primitive loom, I can understand it and could probably do it now. Naalbinding – well, it’s knots, and given a needle and knowing how to make a knot, I think it’s pretty straightforward to tie a bunch of knots on top of each other to make fabric out of it.

But I’ve been knitting for quite a while now and have finished many projects, and I still can’t say I totally get how knitting works. I know there’s a series of interconnected loops, but how exactly do they not fall apart? How does the starting string turn into the final project? It’s not in my head. I only know the steps.

I think that if you erased my memory and handed me some simple tools, especially a loom, I could figure out how to make cloth by weaving. I think there’s also a good chance I could figure out sprang, and naalbinding. But I think that if you handed me knitting needles and string – even if you told me I was trying to get fabric made from a bunch of loops that are looped into each other – I’m not sure I would get to knitting.

(I do feel like I might have a shot at figuring out crochet, though, which is supposedly younger than any of these anyway, so maybe this whole line of thinking means nothing.)

Idle hands as the mother of invention?

Why do we innovate? Is necessity the mother of invention?

This whole story suggests not – or at least, that necessity is not the whole story. We have the first knit structures in belts (which already existed in other forms) and decorative silver wire (strictly ornamental). We have knit socks from Egypt, not a place known for demanding warm foot protection. What gives?

Elizabeth Wayland Barber says this isn’t just knitting – she points to the spinning jenny and the power loom, both innovations in textile production, which were invented relatively recently by men despite thousands of previous years of women producing yarn and cloth. In Women’s Work: The First 20,000 Years, she writes:

“Women of all but the top social and economic classes were so busy just trying to get through what had to be done each day that they didn’t have excess time or materials to experiment with new ways of doing things.”

This suggests a somewhat different mechanism of invention – sure, you need a reason to come up with or at least follow up on a discovery, but you also need the space to play. 90% of everything is crap; you need to be really sure that you can throw away (or unravel, or afford the time to re-make) 900 crappy garments before you hit upon the sock.

Bill Bryson, in the introduction to his book At Home, writes about the phenomenon of clergy in the UK in the 1700s and 1800s. To become an ordained minister, one needed a university degree, but not in any particular subject, and little ecclesiastical training. Duties were light; most ministers read a sermon out of a prepared book once a week and that was about it. They were paid in tithes from local landowners. Bryson writes:

“Though no one intended it, the effect was to create a class of well-educated, wealthy people who had immense amounts of time on their hands. In consequence many of them began, quite spontaneously, to do remarkable things. Never in history have a group of people engaged in a broader range of creditable activities for which they were not in any sense actually employed.”

He describes some of the great amount of intellectual work that came out of this class, including not only the aforementioned power loom, but also: scientific descriptions of dinosaurs, the first Icelandic dictionary, Jack Russell terriers, submarines, aerial photography, the study of archaeology, Malthusian traps, the telescope that discovered Uranus, werewolf novels, and – courtesy of the original Thomas Bayes – Bayes’ theorem.

I offhandedly posited a random per-person effect in the previous post – each individual has a chance of inventing knitting, so eventually someone will figure it out. There’s no way this can be the whole story. A person in a culture that doesn’t make clothes mostly out of thread, like the traditional Inuit (thread is used to sew clothes, but the clothes are very often sewn out of animal skin rather than woven fabric), seems really unlikely to invent knitting. They wouldn’t have lots of thread about to mess around with. So you need the people to have a degree of familiarity with the materials. You need some spare resources. Some kind of cultural lenience for doing something nonstandard.

…But is that the whole story? The Inca Empire was enormous, with some 12,000,000 citizens at its height. They had the quipu system for recording numbers with knotted string, but they never developed a written language. (The Maya, up in Mesoamerica, did.) Easter Island, between its colonization by humans around 1000 CE and its much worse colonization by Europeans around 1700 CE, had a maximum population of maybe 12,000. It’s one of the most remote islands in the world. And yet, in isolation from other societies, its people did develop a written script, Rongorongo – in fact, Polynesia’s only native writing system.

Color photo of a worn wooden tablet engraved with intricate Rongorongo characters.
One of ~26 surviving pieces of Rongorongo, the undeciphered written script of Easter Island. This is Text R, the “Small Washington tablet”. Photo from the Smithsonian Institution. (Image rotated to correspond with the correct reading order, as a courtesy to any Rongorongo readers in my audience. Also, if there are any Rongorongo readers in my audience, please reach out. How are you doing that?!)
A black and white photo of the same tablet. The lines of characters are labelled (e.g. Line 1, Line 2) and the  symbols are easier to see. Some look like stylized humans, animals, and plants.
The same tablet with the symbols slightly clearer. Image found on kohaumoto.org, a very cool Rongorongo resource.

I don’t know what to do with that.

Still. My rough model is:

A businessy chart labelled "Will a specific group make a specific innovation?" There are three groups of factors feeding into each other. First is Person Factors, with a picture of a person in a power wheelchair: Consists of [number of people] times [degree of familiarity with art]. Spare resources (material, time). And cultural support for innovation. Second is Discovery Factors, with a picture of a microscope: Consists of how hard the idea "is to have", benefits from discovery, and [technology required] - [existing technology]. ("Existing technology" in blue because that's technically a person factor.) Third is Special Sauce, with a picture of a wizard. Consists of: Survivorship Bias and The Easter Island Factor (???)

The concept of this chart amused me way too much not to put it in here. Sorry.

(“Survivorship bias” meaning: I think it’s safe to say that if your culture never developed (or lost) the art of sewing, the culture might well have died off. Manipulating thread and cloth is just so useful! Same with hunting, or fishing for a small island culture, etc.)

…What do you mean Loopholes has articles about the history of the autoharp?! My Renaissance man aspirations! Help!


Delightful: A collection of 1900s forgeries of Paracas textiles. They’re crocheted rather than naalbound.

1 (Uh, usually. You can finger-weave with just a stick or two to anchor some yarn to, but it wasn’t widespread, possibly because it’s hard to make the cloth very wide.)

2 I had this whole thing ready to go about how a knit belt was ridiculous because a knit tube isn’t actually very stretchy “vertically” (or “warpwise”), and most of its stretch is “horizontal” (or “weftwise”). But then I grabbed a knit tube (a fingerless glove) from my environment and measured it at rest and stretched, and it stretched about as far both ways. So I’m forced to consider that a knit belt might be a reasonable thing to make for its stretchiness. Empiricism: try it yourself!

3 Fun fact: Plant-based fibers (cotton, linen, etc.) are mostly made of carbohydrates. Animal-based fibers (silk, wool, alpaca, etc.) and leather are mostly made of protein. Fens are wetlands that are alkaline; bogs are acidic. Carbohydrates decay in acidic bogs but are well-preserved in alkaline fens. Proteins dissolve in alkaline fens but last in acidic bogs. So it’s easier to find preserved animal material or fibers in bogs, and preserved plant material or fibers in fens.


Cross-posted to LessWrong.