
Learn to write well BEFORE you have something worth saying

I’ve been reading a lot of trip reports lately. Trip reports are accounts people write about their experiences doing drugs, for the benefit of other people who might do those same drugs. I don’t take illegal drugs myself, but I like learning about other people’s intense experiences, and trip reports are little peeks into the extremes of human consciousness. 

In some of these, people are really trying to communicate the power of the revelations they had on a trip. They’re trying to share what might be the most meaningful experience of their entire life.

Here’s another thing: almost all trip reports are kind of mediocre writing.

This is wildly judgmental but I stand by it. Here are some common things you see in them:

  • Focusing on details specific to the situation that don’t matter to the reader. (Lengthy accounting of logistics, who the person was with at what time even when they’re not mentioned again, etc.) 
  • Sort of basic descriptions of phenomena and emotions: “I was very scared”. “I couldn’t stop thinking about it.”
  • Cliches: “I was glad to be alive.” “It felt like I was in hell.” “It was an epic struggle.”
  • Insights described in sort of classically-high-sounding abstractions. “I realized that the universe is made of love.” “Everything was nothing and time didn’t exist.” These statements are not explained, even if they clearly still mean a lot to the writer, and do not really communicate the force of whatever was going on there.

It’s not, like, a crime to write a mediocre trip report. It’s not necessarily even a problem. They’re not necessarily trying to convince you of anything. A lot of them are just what it says on the tin: recording some stuff that happened. I can’t criticize these for being bland, because that seems like trying to critique a cookbook for being insufficiently whimsical: they’re just sharing information.

(…Though you can still take that as a personal challenge; “is this the best prose it can be?” For instance, How to Cook and Eat in Chinese by Chao Yang Buwei is a really well-written cookbook with a whimsical-yet-practical style. There’s always room to grow.)

But some of these trip reports very much do have an agenda, like “communicating crucial insights received from machine elves” or “convincing you not to take drug X because it will ruin your life”. In these cases, the goal would be better served if the writing were good, and boy howdy, my friends: the writing is not good.

Which is a little counter-intuitive, right? You’d think these intense and mind-blowing experiences would automatically give you rich psychic grist for sharing with others, but it turns out, no, accounts of the sublime and life-altering can still be astonishingly mid.

Now certain readers may be thinking, not unreasonably, “that’s because drug-induced revelations aren’t real revelations. The drug’s effects make some thoughts feel important – a trip report can’t explain why a particular ‘realization’ is important, because there’s nothing behind it.”

But you know who has something new and important to say AND knows why it’s important? Academic researchers publishing their latest work.

But alas, academic writing is also, too frequently, not good. 

And if good ideas made for good writing, you’d expect scientific literature to be the prime case for it. Academic scientists are experts: they know why they made all the decisions they did, they know what the steps do, they know why their findings are important. But that’s also not enough.

Set aside academic publishing and the scientific process itself; let’s just look at the writing. It’s very dense, denser than it needs to be. It does not start with simple ideas and build up; it’s practically designed to tax the reader. It’s just boring; it’s not pleasant to read. The rationale behind specific methods or statistical tests isn’t explained. (See The Journal of Actually Well-Written Science by Etienne Fortier-Dubois for more critique of the standard scientific style.) There’s a whole career field of explaining academic studies to laypeople, which is also, famously, often misleading and bad.

This is true for a few reasons:

First, there’s a floor of how “approachable” or “easy” you can make technical topics. A lot of jargon serves useful purposes, and what’s the point in a field of expertise if you can’t assume your reader is caught up on at least the basics? A description of synthesizing alkylated estradiol derivatives, or a study on the genome replication method of a particular virus, is simply very difficult to make layperson-accessible.

Second, academic publishing and the scientific edifice as it currently stands encourage uniformity of many aspects of research output, including style and structure. Some places like Seeds of Science are pushing back on this, but they’re in the minority.

But third, and this is what trips up the trip-reporters and the scientists alike, writing well is hard. Explaining complicated or abstract or powerful ideas is really difficult. Just having the insight isn’t enough – you have to communicate it well, and that is its own, separate skill.

A trippy kaleidoscope-type image of a scientist writing something down.

I don’t really believe in esoterica or the innately unexplainable. “One day,” wrote Jack Kerouac, “I will find the right words, and they will be simple.” Better communication is possible. There are great descriptions of being zonked out of one’s gourd and there is great, informative, readable science writing.

So here’s my suggestion: Learn to write well before you have something you really need to tell people about. Practice it on its own. Write early and often. Write a variety of different things and borrow techniques from writing you like. And once you have a message you actually need to share, you’ll actually be able to express it.

(A more thorough discussion of how to actually write well is beyond the scope of this blog post – my point here is just that it’s worth improving. If you’re interested, let me know and I might do a follow-up.)


Thank you Kelardry for reviewing a draft of this post.

Support Eukaryote Writes Blog on Patreon.

Crossposted to: EukaryoteWritesBlog.com | Substack | LessWrong


I got dysentery so you don’t have to

Drawing of the author wearing hospital scrubs surrounded by a halo of bacteriophages and floating beakers, a form, a cup, pills, bacillus bacteria, and a copy of Infinite Jest.

This summer, I participated in a human challenge trial at the University of Maryland. I spent the days just prior to my 30th birthday sick with shigellosis.

What? Why?

Dysentery is an acute disease in which pathogens attack the intestine. It is most often caused by the bacteria Shigella. It spreads via the fecal-oral route. It requires an astonishingly low number of pathogens to make a person sick – so it spreads quickly, especially in bad hygienic conditions or anywhere water can get tainted with feces.

It kills about 70,000 people a year, 30,000 of whom are children under the age of 5. Almost all of these cases and deaths are among very poor people.

The primary mechanism by which dysentery kills people is dehydration. The person loses fluids to diarrhea and for whatever reason (lack of knowledge, energy, water, etc) cannot regain them sufficiently. Shigella bacteria are increasingly resistant to antibiotics. A disease easily treatable by lots of fluids and antibiotics is becoming more lethal.

Can someone do something?

The deal with human challenge trials

Clinical trials in general are expensive to run but pretty common; clinical trials where you are given the disease – “challenged”, AKA “human challenge trials” – are very rare. The regular way to investigate a possible treatment is to make a study plan, then find people who already have the disease and offer to enroll them in the experimental treatment. Challenge trials are rarer, but often more valuable for research. Shigellosis is a fast-acting disease that is eminently treatable with antibiotics and uncommon in the US – it would be very difficult to test an alternative shigellosis treatment here in the conventional way, but it’s a great candidate for challenge trials.

I’d signed up for email alerts about challenge trials at the nearby University of Maryland, and got one about an upcoming study. It caught my eye that it was for a phage-based treatment. Bacteriophages are really promising antibacterial medicines, not to mention what I’d studied as an undergrad.

Here’s the thing: you really only get good medical research out of human subjects. Also, I could use $4000, and this seemed like a cool way to spend a couple weeks and help out medical research. So I signed up, went in for a general health screening appointment, and shortly after was told I was in. I made plans to spend my 30th birthday in a dysentery ward.

Dysentery: it’s a modern disease

Many of you reading this will know about dysentery from the 1971 simulation game The Oregon Trail (or its later versions). The actual Oregon Trail was a network of trails, and the corresponding migration of mostly-white pioneers moving on foot and by ox-drawn wagon from the eastern US to the western US between 1830 and 1869. About 400,000 people* crossed the Oregon Trail in this period, in conditions where diseases spread very fast: a bunch of stressed and malnourished people, traveling in close quarters with their families, stopping and pooping near the same trails and creeks with no regard for water safety. From these and other stressors, about 65,000 people died over this 40-year period.

Stated another way: more people die from dysentery every year now than ever died from any cause on the Oregon Trail. So let’s calm down about the Oregon Trail, okay?

*Lots of people use this 400,000 number but I can’t figure out where it came from and if this is referring to individuals or families – I’ve seen sources indicate it was either. If it was families, it was probably counting the men who were “the pioneers” and then being like “oh and there were women and kids there also, I guess.” But maybe it was individuals? Or maybe someone just made this up? Again, no idea where it came from. You gotta be careful every time anyone tells you a number. It’s so bad out there. The only thing worse than someone telling you a number is when they don’t tell you a number.

Getting ready

A week or so before I went in, I’d been pointed to Jake Eberts’s Twitter thread. Eberts participated in an earlier challenge trial for a dysentery vaccine – I think also run by UMD, at the same Baltimore facility I was at – where he got very sick and went viral for livetweeting the experience. He started a fundraiser for dysentery relief and got a lot of people to sign up for clinical trials themselves, and now he works for 1DaySooner, premier “hey, human challenge trials are cool” advocates.

I read his twitter thread and sent my friends this meme:

(chuckles) I'm in danger.

I brought Infinite Jest, which I was partway through and was a lot more through (but still not done) by the time I was discharged. (I’m writing this while traveling, and in a fit of poor timing I finally finished it on the plane ride in, which means I now have a giant brick of a book to carry around in my suitcase.) My friend Ozy said that Infinite Jest was a really good book for reading in a dysentery ward.

I thought, oh, that’s interesting, you know, a lot of the characters are pretty miserable and living in a controlling institution of some kind. Then I remembered this one passage, where circumstances have forced a character into withdrawing from heroin alone, holed up for days in a public bathroom:

Time began to pass with sharp edges. Its passage in the dark or dim-lit stall was like time being carried by a procession of ants, a gleaming red martial column of those militaristic red Southern-U.S. ants that build hideous tall boiling hills, and each vile gleaming ant wanted a minuscule little portion of Poor Tony’s flesh in compensation as it helped bear time slowly forward down the corridor of true Withdrawal. By the second week in the stall time itself seemed the corridor, lightless at either end. After more time time then ceased to move to be moved or be move-throughable and assumed a shape above and apart, a huge, must-feathered, orange-eyed wingless fowl hunched incontinent atop the stall, with a kind of watchful but deeply uncaring personality that didn’t seem keen on Poor Tony Krause as a person at all, or to wish him well. Not one little bit. It spoke to him from atop the stall, the same things, over and over. They were unrepeatable. Nothing in even Poor Tony’s grim life-experience prepared him for the experience of time with a shape and an odor, squatting; and the worsening physical symptoms were a spree at Bonwit’s compared to time’s black assurances that the symptoms were merely hints, signposts pointing up at a larger, far more dire set of Withdrawal phenomena that hung just overhead by a string that unraveled steadily with the passage of time. It would not keep still and would not end; it changed shape and smell.

I was forced to agree that Infinite Jest was indeed probably a pretty good choice.

Two days until challenge

At check-in, everyone’s bags were searched. I got the impression they really didn’t want some kind of bad outcome where they had to call cops into a ward where everyone was running around with the bloody flux, which, fair enough. They did take away my craft scissors. I didn’t end up knitting, so it wasn’t a big deal, but, like, I’m pretty sure I’ve taken those on airplanes before. Okay.

We were assigned a number (I was just on this side of divinity at No. 107), given a plastic wristband, and shown to our rooms. We were also given two pairs of scrubs which were to be our main clothes on the ward – less risk of ruining hard-to-launder clothes in the more messy phases of the study – though it did mean 15 people having to coordinate laundry every day.

My hospital bed with folded scrubs atop, a cup of coffee, and copy of Infinite Jest set on the adjustable bedside table.
Where I made my stand

The ward was more of a retrofitted office building than a hospital. It consisted of some spaces for nurses and testing, about six bedrooms of various sizes (each with its own half-bath), two separate areas with two shower stalls each, a “kitchen” stocked with snacks where the meals were delivered, a closet with a washer and dryer, and a rec room with couches, a TV, and pool and foosball tables.

There were about 16 people on the ward, an even mix of men and women. Most of them were Baltimore locals; many of them had done other trials before. We were fully allowed to socialize – dysentery is, again, infectious through the fecal-oral route. Hand sanitizer was stationed all over the place, but there wasn’t a huge concern that we’d infect each other, or even the nurses.

Life on the ward was very chill. I was worried about being bored, but I’d forgotten that I spend most of my waking hours on the computer anyway, so it really wasn’t a problem. When even my iron gaze faltered and I couldn’t stare at the computer anymore, I read Infinite Jest.

Meals were delivered in one batch a day – one cold, usually wrap- or sandwich-based meal, one hot breakfast, and one hot supper dish, each labelled with our numbers.

Sample lunch: a sandwich, salad, and roll in plastic packaging, plus a bottle of water.
Sample lunch

They were, like, fine. The caterers made a few interesting choices – for vegetarians such as myself, every sandwich/wrap was some veggies with hummus, and now and then there’d be like breakfast pancakes with a curry-flavored veggie hamburger patty. I would describe the flavor when drenched with table syrup as “weird.”

Like, you can tell the person planning that menu was like “okay, pancakes and bacon… And wait, crap, something with protein for the vegetarians.” But again, I’ve eaten worse that I actually paid for the ingredients for, and I was definitely eating better in terms of variety and volume than I did at home. I’m not complaining.

One day before challenge: the age of phage

This study was sort of an over-time test – ideally the first of a few – where we’d get phages before (unless we were in the control group), during, and after the “challenge” (the shigella) to see if they had any effect at all. If they did, later studies could determine whether you could just drink the phage after getting sick, or whether it would work best as a prophylactic, etc. We drank a chalky buffer solution to neutralize stomach acid and give the bacteriophages (and later, the bacteria) a better chance at making it to the intestine.

What do the solutions taste like? Basically salty fluid with a slight mineral nuance, from the buffer. Phages are known to be pretty tasteless, so I didn’t expect anything else.

Bacteriophage therapy: sending a cat after mice

A bacteriophage is a virus that infects bacteria. They were discovered shortly after bacteria themselves were really pinned down – microscopes were finally powerful enough to make out bacteria, and visionaries like Robert Koch and Louis Pasteur were pinpointing that these little nothing-pinpricks were in fact the source of diseases. (For more on the discovery of the microbial world, see “Through the Looking Glass, and What Zheludev et al. (2024) Found There”, my recent piece in Asterisk Magazine.)

In 1917, Félix d’Hérelle found an agent that killed dysentery bacteria, which passed through a fine filter, and which could reproduce – a living agent that killed bacteria, but that was itself smaller than a bacterium.

d’Hérelle realized right away that this substance, which killed bacteria – and which people had apparently been drinking – had potential as medicine. He bred pathogenic bacteria in vats, added his solutions, and waited until the cloudy broth of bacteria turned clear – then offered this liquid to sick patients. Many of them, sure enough, recovered. I was (unless I was in the control group) walking in historical footsteps. Dysentery was the first human disease ever treated with phage medicine.

Sending a phage after bacteria is like sending a cat after mice. Phages are small, targeted, well-adapted hunters of specific bacteria. There is no way for them to infect a human cell like a human virus would – they are completely specialized. Phages are already in the body, along with their bacterial hosts – so you’re not introducing a radically new agent – and the immune system tends to play well with them.

Phages are used widely in some parts of the world – the Republic of Georgia and Poland both sell phages over the counter, for use in, say, intestinal conditions or wounds, and have clinics for personalized treatment. In the US, phage therapy is an extremely rare specialty, sometimes even falling under the umbrella of naturopathy. (A phage being a natural bioactive product.)

Why would you use antibiotics instead of phages, or vice versa?

Phages:
  • Targeted – a phage attacks one species or one strain of bacteria
  • Easy to find usable new ones
  • More finicky (e.g. less stable)
  • Predator-prey pharmacokinetics
  • Mostly spread where the bacteria are
  • Very few side effects

Antibiotics:
  • Broad-spectrum
  • Hard to find usable new ones
  • Shelf stable
  • Regular blood-elimination-curve pharmacokinetics
  • Systemic; enter the bloodstream
  • Sometimes-serious side effects

What if the bacteria become resistant to the phages too?

Well, that can happen easily – probably even more easily than with antibiotics. Cells have been duking it out with viruses since the beginning of life. (Did you know CRISPR-Cas9, now used for gene editing, evolved in nature as a way for bacteria to recognize and cut up phage DNA?)

But the difference is that whereas new antibiotics are very hard to find, there is a nigh-inexhaustible evolutionary font of phages constantly pulling ahead in the arms race. So in short: once a bacteria becomes resistant to your special phage, just find a new phage.

Do they work?

To my knowledge, there aren’t any really gold-standard reviews comparing phages head-on to antibiotics. Phages are fiddlier than antibiotics, with a specialized body of knowledge for treatment – they’re less stable, have to be introduced to the site directly, and require much more care in choosing an appropriate treatment.

One small study found a phage treatment comparably effective to antibiotics for Salmonella Typhimurium in 36 lab mice.1 Another meta-study compared modern antibiotic studies to 17 studies from the last time human phage research was in vogue in the US, the 1920s-40s, and found that phages were effective treatments – but 4 modern clinical trials suggested phages were not effective.2 A more recent study of personalized phage therapy showed promising results in infections considered “difficult-to-treat”.3 They seem to work best when used with antibiotics.

I’m not doing a full lit review right now. I bet that phage therapy still has promise – more careful formulations and just more research will help. And that’s before the challenges of commercial rollout, including things like handling FDA approval for a product that must be reformulated regularly.

The elephant in the room is antibiotic resistance – antibiotics usually work extremely well, but increasingly, bacteria can survive them. And unlike other problems you might think of as exacerbated by over-medication, antibiotic resistance is not a condition of privileged countries – lots of Shigella strains in developing countries are increasingly antibiotic-resistant.

Even if phages don’t work as well as the magic silver bullet that is antibiotics, they might work well enough to be worth incorporating into our medical toolbox as part of antimicrobial resistance (AMR) management. And that means developing them now.

The other challenge is, of course, regulatory – I’m excited that Intralytix, who made the experimental product I did-or-didn’t take, is throwing their hat into the space of human phage medicine, and I’m curious to see how they handle this.

Day 1 of challenge

On the third day in the ward, after a day of baseline and a day of phage (unless we were in the control group), we took another dose of phage (again, unless we were in the control group), waited a couple of hours, and then drank a glass of shigella. This tasted like baking soda and salt with no particular nuance, nor would I expect nuance; the dose was some 1300 organisms – as in 1300 individual cells of bacteria, count ’em. That’s a preposterously scant microbial inoculum: even for devoted parasites, it often takes on the order of millions of organisms to lodge an infection. But shigella is remarkably tenacious – it would only have taken 10-200 cells. This was overkill, a dose that WILL make you sick, unless you’re protected. All the participants drank.

The waiting game

Shigella has a 24-72 hour incubation period, maybe 12-96 hours on the far ends.

Perhaps owing to quirks of my own psyche, whose origins I’m sure we don’t need to explore here, I find it reassuring to have reference experiences to conveniently benchmark the rest of my life by. If you go skiing, you can ask yourself later, “is this more or less exhilarating than skiing?” If you fall in love once, you can compare future loves to that earlier experience.

A good standard reference point for “shared, resigned dread” is the 72 or so hours in a clinical trial ward after everyone has ingested shigella bacteria along with maybe-a-treatment.

The vibes were ominous. Jovially nervous. Unprecedented gastrointestinal distress may or may not have been coming for me, but if it was, it would be arriving in (on average) 48 hours.

The floor was pretty quiet. The hours ticked by.

Infinite Jest is, by the way, a great book. David Foster Wallace knew how to write a goddamn sentence on purpose.

Let’s learn about Shigella pathogenesis

While I waited, I decided to read up. Shigella bacteria invade the body via the digestive canal and infect the intestines – both small and large. They release a toxin that facilitates infection of other parts of the intestine and their eventual replication. Shigella is an intracellular pathogen – some bacteria, like all viruses, actually enter the host’s cells and replicate in there.

Shigella actually prefers to invade intestinal cells from the outside (or should I say the inside?) – from the body’s side of the cell layer. But the body is a locked-down system with its own guard force, the immune system, keeping the dirty external environment separate from the sterile inside environment. Shigella in the digestive tract really wants to poke through that line of intestinal cells and get at them from the other side.

How does Shigella get to the outside of the intestinal cell layer?

Wikipedia explains:

Once inside of the colon, S. flexneri can penetrate the epithelium in three ways:
1) The bacterium can alter the tight junctions between the epithelial cells, allowing it to cross into the sub-mucosa.
2) It can penetrate the highly endocytic M cells that are dispersed in the epithelial layer and cross into the sub-mucosa.
3) After reaching the sub-mucosa, the bacteria can be phagocytosed by macrophages and induce apoptosis, cell death. This releases cytokines that recruit polymorphonuclear cells (PMN) to the sub-mucosa. S. flexneri still in the lumen of the colon traverse the epithelial lining as the PMNs cross into the infected area. The influx of PMN cells across the epithelial layer in response to Shigella disrupts the integrity of the epithelium allowing lumenal bacteria to cross into the sub-mucosa in an M-cell independent mechanism.

This is really funny. Okay, imagine there’s a blockade of tightly parked police cars facing you, and you and your buddies need to get to their trunks so you can hide in them. Here are 3 ways to do this:

  1. Push the police cars to the side so you can walk between them
  2. Look for the police cars with the biggest doors, so that you can squeeze through the car and leave through their trunk (or I guess probably just stay in the trunk at that point)
  3. Get yourself and your buddies arrested, then when they send backup police vans to push through the line of police cars to arrest all of you, run through the cracks in the blockade that those vans open up. Then go to the trunks of the original cop cars.

And then once you’re inside the car, you can open the doors between the cop cars (they’re sliding doors) and then travel laterally between the cop cars. I love cells.


As a fun side note, Shigella – including the strain I was developing an intimate relationship with, Shigella flexneri – is, taxonomically speaking, a kind of Escherichia coli. Now you may notice from the scientific nomenclature that this is not how this is supposed to work.

When genotyping was developed and applied to some familiar standby kinds of bacteria that microbiology-as-science figured it understood pretty well, researchers learned two surprising new things:

  • E. coli is not a coherent species. Different strains of E. coli – known to have slightly different properties, but thought to be all slight variations on the same basic species – turned out to have only 20% of their genes in common. (Humans and our closest relatives, chimpanzees, have almost all of our genes in common* and still aren’t considered the same genus.)
  • Shigella is inside that umbrella of shared genes – a secret family member, a taxon in disguise. It’s more similar to many E. colis than some E. colis are to each other.

For most species, the procedure at this point would be to throw in the towel and reclassify – Escherichia coli spp. shigella, perhaps. But in this case, shigatoxin-producing Shigella and other pathogenic Escherichia coli have different enough clinical presentations that the distinction is still medically valuable, so accurate nomenclature has bowed its head to practicality. Cool! (Compare and contrast with trees.)

*Wait, don’t people talk about 99% or something? That number is about sequence similarity, not shared genes – if we have 96% sequence similarity, meaning literally the same genetic code, probably even more of the genome is in related genes. Genes can code for clearly related proteins/sequences and still not be identical – like they came from a common ancestor and haven’t diverged much, but have picked up a few changes along the way. Different E. coli strains have 80% completely different genes, while a human has maaaybe 50 genes that a chimp doesn’t? I didn’t try very hard to find the actual comparable metric between them. It’s what I was telling you about numbers. You gotta watch out.
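To make that footnote’s distinction concrete, here’s a toy sketch of the two metrics being compared – what fraction of genes two genomes share at all, versus how identical the shared sequences are. The “genomes” below are invented three-gene dictionaries, purely for illustration:

```python
# Two invented "genomes": gene name -> sequence. All data made up.
genome_a = {"geneA": "ATGGCCTTA", "geneB": "ATGCCC", "geneC": "TTAGGC"}
genome_b = {"geneA": "ATGGCTTTA", "geneB": "ATGCCC", "geneD": "GGGAAA"}

# Metric 1: gene overlap -- what fraction of all genes appear in both?
shared = genome_a.keys() & genome_b.keys()
gene_overlap = len(shared) / len(genome_a.keys() | genome_b.keys())

# Metric 2: sequence identity -- across shared genes, how many positions match?
matches = total = 0
for gene in shared:
    for a, b in zip(genome_a[gene], genome_b[gene]):
        total += 1
        matches += a == b

print(f"gene overlap: {gene_overlap:.0%}")          # 50% in this toy example
print(f"sequence identity: {matches / total:.0%}")  # ~93% in this toy example
# Two genomes can score high on one metric and low on the other -- which is
# why "20% shared genes" (E. coli strains) and "~99% similarity" (human vs.
# chimp) are answers to different questions.
```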

Let’s really learn about Shigella pathogenesis

Some 24 hours in, the first people started going down. Via word of mouth I heard the phrase “Exorcist-style projectile vomiting” used to describe someone in the next room over, a description whose accuracy I fortunately cannot verify. Most people were in their rooms all day anyhow, but the crowd in the kitchen at mealtimes or showing up for morning dosing got thinner.

I really held out. Going to bed at the end of the second night, I felt okay, but couldn’t sleep well – nerves, I thought, or the faint distorted unpleasant bodily noises from other parts of the ward. I maybe managed a couple hours of sleep by the wee hours.

48 hours in, I woke up for vitals and dosing at 6 AM and started feeling really faint on the short walk to the next room. I stumbled over to the toilet. Off to the races!

I should be clear in this section that I was in as close to zero long-term danger as you can get with dysentery, which is damn close – this was in a controlled setting with doctors and nurses monitoring my condition regularly, with a known pathogen with a known cure. We weren’t expected to languish in indefinite misery – they wanted to see if we got sick and then how sick we got, yes, but only up to a point, at which they would “call it”: administer regular antibiotics and end our experimental treatment.

All I had to do was let the time pass.

The next few hours were very bad. Surprisingly, the gastrointestinal symptoms were not much of a problem for me – I had them, but they weren’t much worse than regular food poisoning. I didn’t throw up. I just wanted to go back to sleep.

But sleep wasn’t coming.

First was the plague of chills. The institutional cotton blankets did nothing; four of them also did nothing, as if there was no heat to hold in. Freezing, tooth-clattering cold.

Within an hour came the plague of joint pain. It sank in rather quickly and was all in the lower extremities – hips, legs. Any more than one blanket became too heavy to bear having on them, so off they went – freezing cold, but the blankets weren’t palpably doing anything anyway. Right? I remembered people with chronic pain reporting that sometimes lying down was worse than other positions, and sure enough, sitting up was – somehow – mildly better. I situated the adjustable bedside table so that I could slump onto it and maybe even sleep like that, but sleep remained out of reach.

Time wasn’t shifting so much as dragging, by the bones, over rough pavement, every second another six inches, grating, relentless, second after second after second. Time is space in which you are moved forward one way or another. Pain is an active process.

Around three hours later, the doctor came in and judged that I was done – they were calling it – symptomatically, I had reached the Clinical Endpoint and would be treated. I was handed Tylenol and antibiotics.

I’d always thought of Tylenol as sort of a second-rate painkiller, probably worth trying if you couldn’t find ibuprofen, but damn if that Tylenol didn’t work pretty quickly. As soon as I could, I went to sleep for like four hours – which, as usual, if you’re in a position of needing four hours of sleep, makes a lot of things better and more manageable once you can swing it.

Out the other side

The antibiotics worked really quickly. Within hours, the fever had vanished and the aches had dwindled to twinges. Within a couple days, even the gastrointestinal situation was back to normal. Some people were harder hit; some were just starting to get sick – staying vanished in their rooms even as I stuck my head into the kitchen and rec room like the first hopeful groundhog of spring – and many had been fine the whole time.

The thing that kills people in dysentery is dehydration and complications thereof. So part of the recovery is collecting and measuring how much fluid was emitted, and then re-administering oral rehydration fluid – a salty liquid served ice-cold – in precise ratios to replace the bodily fluids lost. A human is a series of tubes with an attached nervous system, and fortunately I was in the company of master plumbers. Once the diarrhea had stopped, I was also able to stop guzzling big plastic cups of what I liked to imagine arctic seawater tastes like. Progress!

Breakfast - french toast in a plastic container and a cup of coffee - illuminated in golden morning light, at a table with a nice view out into the city.
Great view from the rec room.

People who recovered and who never got sick started hanging out in the rec room more, chatting and playing pool. I spent my birthday calling my parents and talking to internet friends. One streamed himself playing a fish-themed video game in my honor. The Baltimoreans inexplicably set off fireworks many nights – maybe the proximity to July 4th? – and this was one of them. Not roadside-stand-ground fireworks, but big aerial fireworks. A fellow subject found ice cream bars in the kitchen freezer and kindly brought me one as a present. Fireworks aside, it was a quiet day.

Apologies for the deception, reader. Technically speaking, the word “dysentery” usually refers to a syndrome, like “psychosis” or “high blood pressure”, which can have multiple causes but is defined by specific symptoms. The defining symptom of dysentery is bloody diarrhea. I personally did not get this particular symptom – I became sick with shigellosis but, according to a common criterion, did not get dysentery. I’m sorry for clickbaiting you. In my defense, I would have taken it over the joint pain.

Aftermath

Twice a day after antibiotics, we gave the nurses a stool sample – these were cultured at some lab to determine if shigella was still in there. Two negative samples in a row meant that we were free to go.

9 days after coming in, I was cleared for release. I collected my scissors, and, free of dysentery, was released onto the streets of Baltimore. A year older on paper. Healthy, wrung out, ready for time to keep doing what it does. Hopefully, mostly on kinder terms.

View of Baltimore out a train window.
The train ride home. I see that 75% of these photos have coffee in them. What can I say? I’m from Seattle.

I think that, despite my relatively mild case, I was in the control group. But the reason I think that is that in the whole trial, everyone drank the shigella, and it sure seemed like about half of them didn’t get sick at all.

Pretty goddamn cool, if you ask me.

If you want to have study rigor performed on your body, you can look for clinical trials at clinicaltrials.gov. 1DaySooner advocates for human challenge trials; they have a list of challenge trials that are actively recruiting and you can also sign up for email alerts. Many of them pay money. Consider checking it out.

  1. R. a. N. Acebes et al., “Comparing the Efficacy of Bacteriophages and Antibiotics in Treating Salmonella Enteric Serovar Typhimurium on Streptomycin-Pretreated Mice,” Philippine Journal of Science (Philippines) 150, no. 6a (2021), https://agris.fao.org/search/en/providers/122430/records/6474afaca3fd11e430380e4f. ↩︎
  2. Luigi Marongiu et al., “Reassessment of Historical Clinical Trials Supports the Effectiveness of Phage Therapy,” Clinical Microbiology Reviews 35, no. 4 (September 7, 2022): e00062, https://doi.org/10.1128/cmr.00062-22. ↩︎
  3. Jean-Paul Pirnay et al., “Personalized Bacteriophage Therapy Outcomes for 100 Consecutive Cases: A Multicentre, Multinational, Retrospective Observational Study,” Nature Microbiology 9, no. 6 (June 2024): 1434–53, https://doi.org/10.1038/s41564-024-01705-x. ↩︎

Thank you Grace Neptune, Kelardry, and YumAntimatter for reviewing a draft of this post.

I have a Patreon! Consider supporting my writing by throwing me a few bucks. I’d really appreciate it. I won’t be getting dysentery again (…on purpose) but I have some other good stuff in the works.

Posted on: Eukaryote Writes Blog | Substack | LessWrong | EA Forum


Eukaryote writes for Asterisk Magazine

See my piece on the history of microbiology and the vast, invisible worlds that come into focus every time we figure out how to look closer:

Through the Looking Glass, and What Zheludev et al. (2024) Found There at Asterisk Magazine


I’ve written for Asterisk before: What I won’t eat, on arriving at an equilibrium on the “it’s bad when animals suffer” vs. “but animal products taste good” challenge.

Recommendation: reports on the search for missing hiker Bill Ewasko

Content warning: About an IRL death.

Today’s post isn’t so much an essay as a recommendation for two bodies of work on the same topic: Tom Mahood’s blog posts and Adam “KarmaFrog1” Marsland’s videos on the 2010 disappearance of Bill Ewasko, who went for a day hike in Joshua Tree National Park and dropped out of contact.

2010 – Bill Ewasko goes missing

2022 – Ewasko’s body found

And then if you’re really interested, there’s a little more info that Adam discusses from the coroner’s report:

(I won’t be fully recounting every aspect of the story. But I’ll give you the pitch and go into some aspects I found interesting. Literally everything interesting here is just me recounting their work – go check ’em out.)

Most ways people die in the wilderness are tragic, accidental, and kind of similar. A person in a remote area gets injured or lost, becomes the other one too, and dies of exposure, a clumsy accident, etc. Most people who die in the wilderness have done something stupid to wind up there. Fewer people die who have NOT done anything glaringly stupid, but it still happens, the same way. Ewasko’s case appears to have been one of these. He was a fit 66-year-old who went for a day hike and never made it back. His story is not unprecedented.

This is also not a triumphant story. Bill Ewasko is dead. Most of these searches were made and reports written months and years after his disappearance. We now know he was alive when Search and Rescue started, but by months out, nobody involved expected to find him alive.

Ewasko was not found alive. In 2022, other hikers finally stumbled onto his remains in a remote area in Joshua Tree National Park; this was, largely, expected to happen eventually.

I recommend these particular stories, even knowing the ending, because they’re stunningly in-depth, well-written, fact-driven investigations from two smart technical experts trying to get to the bottom of a very difficult problem. Because of the way things shook out, we get to see the investigation and its changing theories at multiple points: Tom Mahood spent years trying to locate Ewasko, writing reports after search after search, finding and receiving new evidence, changing his mind – as did Adam. Then we get the main missing piece: the body is found. Adam visits the site and tries to put the pieces together after that.

Mahood and Adam are trying to do something very difficult in a very level-headed fashion. It is tragic but also a case study in inquiry and approaching a question rationally.

(They’re not, like, Rationalist rationalists. One of Mahood’s logs makes note of visiting a couple of coordinates suggested by remote viewers, AKA psychics. But the human mind is vast and full of nuance, and so was the search area, and on literally every other count, I’d love to see you do better.)

Unknowns and the missing persons case

Like I said, nothing mind-boggling happened to Ewasko. But to be clear, by wilderness Search and Rescue standards, Ewasko’s case is interesting for a couple reasons:

First, Ewasko was not expected to be found very far away. He was a 66-year-old on a day hike. But despite an early and continuous search, the body was not found for over a decade.

Second, two days after he failed to make a home-safe call to his partner and was reported missing, a cell tower reported one ping from his cell phone. It wasn’t enough to triangulate his location, but the ping suggested that the phone was on, at a distance of approximately 10.6 miles from a specific cell tower. The nearest point of that radius was, however, miles in the opposite direction from the nearest likely trail destination from Ewasko’s car – from where Ewasko ought to have been.

A detailed map of Joshua Tree national park. Main points of interest are a few scattered areas all over the park that we know Ewasko was interested in visiting. In the middle of it is a parking lot, Juniper Flats, where Ewasko's car was found. About three miles to the northeast is Quail Mountain, another destination but one that's reachable by the trailhead where the car is - so maybe where he would have gone. But starting a couple miles northeast of THAT is the lower edge of a broad purple ring - this ring represents where a cell tower was pinged 2 days after last contact with Ewasko, suggesting that his phone was at a point within this arc.
The base for a decade of searching. Approximate overlays, info from Mahood and Adam’s work, over Joshua Tree National Park visitor map. 

If you’ve spent much time in wilderness areas in the US, you know that cell coverage is findable but spotty. You’ll often get reception on hills but not in valleys, or suchlike. There’s a margin for error on cell tower pings that depends on location. Also, in this case, Verizon (Ewasko’s carrier) had decent coverage in the area – so it’s kind of surprising, and possibly constrains his route, that his cell phone would have pinged only once.

All of this is very Bayesian: Ewasko’s cellphone was probably turned off for parts of his movement to save battery (especially before he realized he was in danger), maybe there was data that the cell carrier missed, etc., etc. But maybe it suggests certain directions of travel over others. And of course, to have sent that one signal that did go out, he has to have gotten somewhere within that radius – again, probably.
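To make the “very Bayesian” part concrete, here’s a minimal sketch of how a single ping reshapes a search prior. Every number and coordinate below is invented for illustration – the grid, the trailhead and tower locations, the decay scale, the ping’s error margin – none of it comes from the actual search data:

```python
# Toy model: a prior that favors terrain near the trailhead, multiplied by
# a likelihood that favors the ~10.6-mile ring around the cell tower.
import numpy as np

CELL = 0.5                                  # grid resolution, miles
n = 60                                      # hypothetical 30 x 30 mile box
xs, ys = np.meshgrid(np.arange(n) * CELL, np.arange(n) * CELL)

trailhead = (15.0, 15.0)                    # assumed: trailhead at center
d_trail = np.hypot(xs - trailhead[0], ys - trailhead[1])
prior = np.exp(-d_trail / 3.0)              # assumed: day hikers stay close
prior /= prior.sum()

tower = (2.0, 2.0)                          # hypothetical tower location
d_tower = np.hypot(xs - tower[0], ys - tower[1])
# One observed ping: phone was ~10.6 miles from the tower, with an assumed
# +/- 1 mile margin of error.
likelihood = np.exp(-0.5 * ((d_tower - 10.6) / 1.0) ** 2)

posterior = prior * likelihood
posterior /= posterior.sum()
# The hottest cells are near-trailhead terrain that also touches the ping
# ring -- exactly the tension in Ewasko's case, where the ring sat miles
# from where he "ought" to have been.
```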

How do you look for someone in the wilderness?

Search and rescue – especially if you are looking for something that is no longer actively trying to be found, like a corpse – is very, very arduous. In some ways, Joshua Tree National Park is a pretty convenient location to do search and rescue: there aren’t a lot of trees, the terrain is not insanely steep, you don’t have to deal with river or stream crossings, clues will not be swept away by rain or snow.

But it’s not that simple. The terrain in the area looks like this:

A desert landscape of rolling nested hills with shrubs small and large and a few spiky Joshua Trees dotted over it.
I haven’t been to Joshua Tree myself, but going from Adam’s videos, this is representative of the kind of terrain. || Photo in Joshua Tree National Park by Shane Burkhardt, under a CC BY-NC 2.0 license.

There are rocks, low obstacles, different kinds of terrain, hills and lines of sight, and enough shrubbery to hide a body.

A lot of the terrain looks very similar to other parts of the terrain. Also dotted about are washes made of long stretches of smooth sand, so the landscape is littered with features that look exactly like trails.

Also, environmentally, it’s hot and dry as hell, like “landscape will passively kill you”, and there are rattlesnakes and mountain lions.

When a search and rescue effort starts, searchers begin by outlining the area in which they think the person might plausibly be. Natural features like cliffs can constrain the possible routes, as can things like roads, on the grounds that if a lost person found a road, they’d wait by the road.

You also consider how long it’s been and how much water they have. Bill Ewasko was thought to have three bottles of water on him – under harsh and dry circumstances, that water becomes a leash; you can only go so far with what you have. A person on foot in the desert is limited in both time and distance by the amount of water they carry; once that water runs out, their body will drop in the area those parameters circumscribe.
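For a sense of the scale of that leash, here’s a back-of-envelope version. Every number is my own assumption, not the searchers’ – a real estimate would account for temperature, terrain, rest, and how far someone can push after the water is gone:

```python
# Rough bound on travel from a fixed water supply in desert heat.
water_liters = 1.5         # assumed: three ~0.5 L bottles
sweat_rate = 0.5           # assumed: liters lost per hour of desert hiking
pace_mph = 2.0             # assumed: pace over rough, trailless terrain

hours = water_liters / sweat_rate
max_miles = hours * pace_mph
print(f"~{hours:.0f} hours, ~{max_miles:.0f} miles of hard travel")
# => ~3 hours, ~6 miles: a crude search radius, before counting however much
#    farther a person can stagger on after running dry.
```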

Starting from the closest, most likely places and moving out, searchers first hit up the trails and other clear points of interest. But once they leave the trail? Well, when they can, maybe they go out in an area-covering pattern, like this:

A topographical map overlaid with a GPS track. The GPS path evenly and methodically covers a small area.
Map by Tom Mahood of one of his search expeditions, posted here. The single-dashed line is the cellphone ping radius.

But in practice, that’s not always tenable. Maybe you can really plainly see from one part to another and visually verify there’s nothing there. Maybe this wouldn’t get you enough coverage, if there are obstacles in the way. There are mountains and cliff faces and rocky slopes to contend with. 

Also, it’s pretty hard to cover “all the trails”, since they connect to each other, and someone is really more likely to be near a trail than far away from a trail. Or you might have an idea about how they would have traveled – so do you do more covering-terrain searching, or do you check farther-out trails? In this process, searchers end up making a lot of judgment calls about what to prioritize, way more than you might expect.

You end up taking snaky routes like this:

Another topographical map overlaid with a GPS track. This one has a few overlaid with each other, but the active expedition is a snaking winding route around steep mountains, it is NOT visibly even and methodical.
Map by Tom Mahood, posted here. This is a zoom-in of a pretty small area. Blue was the ground covered in this single expedition, green and red are older search trails, and the long dashed line is the cellphone ping radius.

The initial, official Search and Rescue was called off after about a week, so the efforts Mahood records – most of which he did himself, or with some buddies – constitute basically every search that happened. He posts GPS maps too, of each day’s travels overlaid on past travels. You see him work outward, covering hundreds of miles, filling in the blank spots on the map.

Mahood is really good at both being methodical and explaining his reasoning for each expedition he makes, and where he thinks to look. It’s an absolutely fascinating read.

43 expeditions in, in December 2012, Mahood writes this:

A screenshot of comments and a map. The map is a zoomed-in area with a BUNCH of GPS trails over time, filling in space all over about 6 square miles of the map where Ewasko might be, much of it overlapping or close to the cellphone ping radius. Up in a hill near the north corner, and just off the edge of where the latest trail goes, there is a purple dot. The text reads: "Comments:
At one point in my travels I reached the northerly summit of the free standing hill northerly of Samuelson’s Rocks. Looking southwest I could see the rugged slopes of Quail Mountain. Looking due west, I could see right into the mouth of Smith Water Canyon. Toward the north, Quail Wash flowed down toward the homes just beyond the limits of Joshua Tree National Park. I was looking at the entire playing field. I sat for a while, scanned the area with binoculars and thought about things. Knowing where we had been, where the original searchers had been and what we now know the cell phone ping means, I started to develop some new ideas for the next phase in searching. And one way or another, I suspect it will be the final phase. We’ll either find Bill or he’s not findable."
In this image, one map square is ~one mile.

The purple dot is my addition. This is where Ewasko’s body was found in 2022. Mahood wrote this about the same trip where (as far as I can tell) he came the closest any searcher ever got to finding Ewasko. Despite saying it was the end game, Mahood and associates mounted about 50 more trips. Hindsight is heartbreaking.

Making hindsight useful

Hindsight haunts this story in 2024. It’s hard to learn about something like this and not ask “what could have stopped this from happening?”

I found myself thinking, sort of automatically, “no, Ewasko, turn around here, if you turn around here you can still salvage this,” like I was planning some kind of cross-temporal divine intervention. That line of thinking is, clearly, not especially useful.

Maybe the helpful version of this question, or one of them, is: If I were Ewasko, knowing what Ewasko knew, what kind of heuristics should I have used that would have changed the outcome?

The answer is obviously limited by the fact that we don’t know what Ewasko did. There are some specifics, like that he didn’t tell his contacts very specific hiking plans. But he was also planning on a day hike at an established trailhead in a national park an hour outside of Palm Springs. For what happened once he was up the trail, you’ll have to watch Adam’s video and draw your own conclusions (if Adam is even right).

Mahood writes: “People seldom act randomly, they do what makes sense to them at the time at the specific location they are at.” 

And Adam says: “Most man-made disasters don’t spring from one bad decision but from a series of small, understandable mistakes that build on one another.”

Another question is: If I were the searchers, knowing what the searchers know, what could I have done differently that would have found the body faster?

Knowing how far away the body was found and the kind of terrain covered, I’m still out on this one.

How deep the search got

Moving parts include:

  • Concrete details about Ewasko (Ewasko’s level of fitness, his supplies, down to the particular maps he had, what his activities were earlier in the day)
  • Ewasko’s broader mindset (where he wanted to go at the outset, which tools he used to navigate trails, how much HE knew about the area)
  • Ewasko’s moment-to-moment experience (if he were at a particular location and wanted to hurry home, which route would he take? What if he were tired and low on water and recognized he was in an emergency? What plans might he make?) (This ties into the field of Search and Rescue psychology – people disoriented in the wilderness sometimes make predictable decisions.)
  • Physical terrain (which trails exist, and where? How hard is it to get from place to place? What obstacles are there?)
  • Weather (how much moonlight was there? How hard was travelling by night? How bad was the daytime heat?)
  • Electromagnetic terrain (where in the park has cell service?)
  • Electromagnetic interpretation (How reliable is one reported cell phone ping? If it is inaccurate, in which ways might it be inaccurate?)
  • Other people’s reports (the very early search was delayed because a ranger apparently just repeatedly failed to notice Ewasko’s car at a trailhead, and there were conflicting reports about which way it was parked. According to Adam, and I think Mahood, it now seems like the car was probably there the entire time it should have been, and it was probably just missed due to… regular human error. But if this is one of the few pieces of evidence you have, and it looks odd – of course it seems very significant.)
  • The search evolving over time (where has been looked, and in what ways, before? Especially as the years pass – some parts of the terrain are now extremely well-searched, not to mention regularly used by regular hikers. What are the chances one of these searches missed him somewhere, vs. that Ewasko is in a completely new part of the territory?)

I imagine that it would be really hard to choose to carry on with something like this. There was really no new concrete evidence between 2010 and 2022. As Mahood goes on, each expedition adds tracks to his map. Territory fills in – big swathes of terrain, trail after trail. New models emerge, but by and large the only changing detail is that you’ve checked some places now, and he’s somewhere you haven’t checked. Probably.
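That last idea – each empty search drains probability from the places you’ve checked, but never to zero, because a search can miss a body that’s really there – is the classic Bayesian-search update. A minimal sketch, with made-up regions, priors, and detection probability:

```python
# Belief over where the target might be, before any searching. All invented.
prior = {"near trails": 0.6, "ping ring": 0.3, "far wilderness": 0.1}
P_DETECT = 0.7   # assumed: chance a search spots the body if it's there

def update_after_empty_search(belief, searched, p_detect=P_DETECT):
    """Bayes' rule, given that a search of `searched` found nothing."""
    unnorm = {
        region: p * ((1 - p_detect) if region == searched else 1.0)
        for region, p in belief.items()
    }
    total = sum(unnorm.values())
    return {region: p / total for region, p in unnorm.items()}

belief = prior
for _ in range(5):   # five fruitless sweeps of the easy, likely terrain
    belief = update_after_empty_search(belief, "near trails")
print(belief)        # mass drains toward the places you haven't checked
```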

A hostile information environment

Another detail that just makes the work more impressive: Mahood did all these investigations mostly on his own, with (as he sees it, although it’s my phrasing) dismissal and limited help from Joshua Tree National Park officials. The reason Mahood posted all of this on the internet was, as he describes it, throwing up his hands and trying to crowd-source it, asking for ideas.

And the internet has a lot of interested, helpful people – I first ran into Mahood’s blog via r/RBI (“Reddit Bureau of Investigation”) or /r/UnsolvedMysteries or one of those, years ago. I love OSINT, and I think what Mahood did was very cool. But on those sites, and in other places, there are also a lot of out-there wackos. (I know, wackos on the internet. Imagine.) In fact there’s a whole conspiracy theory community called Missing 411 about unexplained disappearances in national parks, which attributes them vaguely to sinister and/or supernatural sources. I think that’s all probably full of shit, though I haven’t tried to analyze it.

Anyway, this case attracted a lot of attention among those types. Like: What if Bill Ewasko didn’t want to be found? What if someone wanted to kill him? What if the cellphone ping was left as an intentional red herring? You run into words like “staged” or “enforced disappearance” or “something spooky” in this line of thought, to say nothing of run-of-the-mill suicide.

Look, we live in a world where people do sometimes get kidnapped or killed, or go to remote places to kill themselves; the probability is not zero. Also – and I apologize if this sounds patronizing to searchers, I mean it sympathetically – extended fruitless efforts like this seem like they could get maddening, to the point where explanations under which all your assumptions are wrong start looking really promising. Like, you’re weaving this whole dubious story about how Ewasko might have gone down the one canyon without cell reception, climbing up and down hills in baking heat while out of water and injured – or there’s this other theory, waving its hands in the corner, going yeah, OR he’s just not in the park at all, dummy!

Its apparent simplicity is seductive.

Mahood apparently never put much stock in these sorts of alternate models of the situation; Adam thought one was seriously likely for a while. I think it’s fair to say that “Ewasko died hiking in the park, in a regular kind of way” was always the strongest theory, but it’s the easiest fucking thing in the world for me to say that in retrospect, right? I wasn’t out there looking.

Maps and territories

Adam presents a theory about Ewasko’s final course of travel. It’s a solid and kind of stunning explanation that relies on deep familiarity with many of the aforementioned moving factors of the situation, and I do want you to watch the video, so go watch his video. (Adam says Mahood disagrees with him about some of the specifics – Mahood at present hasn’t written more after the body was found, but he might at some point, so keep an eye out.)

I’ll just talk a little about one aspect of the explanation: Adam suspects Ewasko initially got lost because of a discrepancy between the maps at the time and the on-the-ground trail situation. See, multiple trails run out of the trailhead Ewasko parked at and through the area he was lost in, including official park-made trails and older abandoned Jeep trails.

Satellite view of parking lot off a road in the wilderness. Out of the parking lot, from the air, we see one faint curving foot trail, and on the other side of the lot, one very clear wide jeep trail.
Example of two trails coming out of the Juniper Flats trailhead where Ewasko’s car was parked. Adam thinks Ewasko could have taken the jeep trail and not even noticed the foot trail. | Adapted from Google Satellite footage from 2024. I made this image but this exact point was first made by Adam in his video.

Adam believes that, partly as a result of the 1994 Desert Protection Act, Joshua Tree National Park was trying to promote the use of its own trails as an ecosystem conservation method. He believes that Joshua Tree issued guidance to mapmakers to not mark (or to de-prioritize marking) trails like the old Jeep roads, and to prioritize marking the official trails, some of which were faint and not well-indicated with signage.

Adam thinks Ewasko left the parking lot on the Jeep road – which, to be fair, runs mostly parallel to the official trail, and rejoins it later. But he thinks that Ewasko, when returning, thought there was another parallel trail to the south and wanted to take a different route back, causing him to look for an intersection. However, Ewasko was already on the southern trail, and the unlabeled intersection he found led to another trail that took him deeper into the wilderness – beginning the terrible spiral.

Think of this in terms of Type I and Type II errors. It’s obvious why putting a non-existent trail on a map could be dangerous: you wouldn’t want someone heading to a place where they think there is a trail, because they could get lost trying to find it. It’s less obvious why not marking a trail that does exist could be dangerous – but it may well have been in this case, because it can lead people to make other navigational errors.

Endings

The search efforts did not, per se, “work”. Ewasko’s body was not found by the search effort, but by backpackers who went off-trail to get a better view of the sunset. His body was on a hill, about seven miles northeast of his car, very close to the cellphone ping radius. He was a mile from a road.

In Adam’s final video, on Ewasko’s coroner’s report, Adam explains that he doesn’t think he will ever learn anything else about Ewasko’s case. Like, he could be wrong about what he thinks happened, or someone may develop a better understanding of the facts, but there will be no new facts. Or at least, he doubts there will be. There’s just nothing left likely to be found.

There are worse endings, but “we have answered some of our questions but not all of them and I think we’ve learned all we are ever going to learn” has to be one of the saddest.

Like I said, I think the searchers made an incredible, thoughtful effort. Sometimes, you have a very hard problem and you can’t solve it. And you try very hard to figure out where you’re wrong and how and what’s going on and what you do is not good enough.

These reports remind me of the wealth of material available on airplane crashes, the root cause analyses done after the fact. Mostly, when people die in maybe-stupid and sad accidents, their deaths do not get detailed investigations, they do not get incident reviews, they do not get root cause analyses.

But it’s nice that sometimes they do.

If you go out into the wilderness, bring plenty of water. Maybe bring a friend. Carry a GPS unit or even a PLB if you might go into risky territory. Carry the 10 essentials. If you get lost, think really carefully before going even deeper into the wilderness and making yourself harder to find. And tell someone where you’re going.


Crossposted to: eukaryotewritesblog.com | Substack | LessWrong

Web-surfing tips for strange times

(h/t Bing’s copilot for the cover images, if you’re seeing them.)

Eukaryote Writes Blog is now syndicating to Substack. I have no plans for paygating content at this time, and new and old posts will continue to be available at EukaryoteWritesBlog.com. Call this an experiment and a reaching-out. If you’re reading this on Substack, hi! Thanks for joining me.

I really don’t like paygating. I feel like if I write something, hypothetically it is of benefit to someone somewhere out there, and why should I deny them the joys of reading it?

But like, I get it. You gotta eat and pay rent. I have a really starry-eyed view of what the internet sometimes is and what it still truly could be: a collaborative free-information utopia.

But here’s the thing: a lot of people use Substack, and I like the way it really facilitates supporting writers with money. I have a lot of beef with aspects of the corporate world – some of it probably not particularly justified, some of it extremely justified – and mostly it comes down to who gets money for what. I really like an environment where people are volunteering to pay writers for things they like reading. Maybe Substack is the route to that free-information web utopia. Also, I have to eat, and pay rent. So I figure I’ll give this a go.

Still, this decision made me realize I have some complicated feelings about the modern internet.

Hey, the internet is getting weird these days

Generative AI

Okay, so there’s generative AI, first of all. The web is lousy with it – on Facebook, as text on websites, in image search results. It’s the next iteration of algorithmic horror and it’s only going to get weirder from here on out.

I was doing pretty well on not seeing generic AI-generated images in regular search results for a while, but now they’re cropping up, and sneaking (unmarked) onto extremely AI-averse platforms like Tumblr. It used to be that you could look up pictures of aspic to throw into GIMP with the aspect logos from Homestuck and call it “claspic”, which is actually a really good and not bad pun, and all of your friends would go “why did you make this image”. And in this image search process you’d realize you hadn’t actually looked at a lot of pictures of aspic before, and that it’s kind of visually different than jello. But now you see some of these results are from Craiyon and are generated, and you’re not sure which ones you’ve already looked past that are not truly photos of aspic, and you’re not sure what’s real, and you’re put off your dumb pun by an increasingly demon-haunted world. Not to mention off aspic.

(Actually, I’ve never tried aspic before. Maybe I’ll see if I can get one of my friends to make a vegan aspic for my birthday party. I think it could be upsetting and also tasty and informative and that’s what I’m about, personally. Have you tried aspic? Tell me what you thought of it.)

Search engines

Speaking of search engines, search engines are worse. Results are worse. The podcast Search Engine (which also covers other topics) has a nice episode arguing that this is because of the growing hordes of SEO-gaming low-quality websites; it discusses the history of these things, as well as Google’s new LLM-generated results.

I don’t have much to add – I think there is a lot here, I just don’t know it – except that I believe most search engines are also becoming worse at finding strings of text put into quotation marks, and are more likely to search for the words in the text not-as-a-string. Bing was briefly the best I’d seen at this; Google is the best now, but I think all of them have gotten worse. What’s the deal with that?

Censorship

Hey, did you know Youtube flags and demotes videos that have the word “suicide” or “kill yourself” (etc.) in them? Many Youtube video makers get paid by Youtube for views on their videos, but if they’re in that setup, a video can also be “demonetized”, meaning the maker doesn’t get paid for views. Such videos can also be less likely to appear in search results – so it’s sort of a gray area between “just letting the content do whatever” and “deleting the content”. I don’t want to quite say that “you can’t say ‘suicide’ in new videos on Youtube”, but it comes out pretty close.

Tiktok has been on this for a while. I was never on Tiktok, but it seems pretty rough over there. Youtube is now on the same train. You don’t have to have the word “suicide” written in the description, or have a viewer flag the video, or anything: Youtube runs speech-to-text (presumably the same program that provides the automatic closed captions) and will detect if the word “suicide” is said in the audio track.
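To give a sense of how blunt this kind of filter is, here’s a purely illustrative sketch – emphatically not Youtube’s actual system, which nobody outside Youtube has seen. The term list and function here are made up, and in reality the transcript would come from speech-to-text over the audio:

```python
# Illustrative only: a blunt keyword filter over a transcript.
# The term list is hypothetical; a real transcript would come from
# speech-to-text run over the video's audio track.
FLAGGED_TERMS = {"suicide", "kill yourself"}

def is_flagged(transcript: str) -> bool:
    """Flag a video if any banned term appears anywhere in its transcript."""
    text = transcript.lower()
    return any(term in text for term in FLAGGED_TERMS)

print(is_flagged("Today's video is about suicide prevention."))      # True
print(is_flagged("Today's video is about sewer slide prevention."))  # False
```

Note that the first example is exactly the kind of suicide-prevention video you’d want people to find, and it gets flagged anyway, while the euphemism sails through. That bluntness is the whole problem.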

Also, people are gonna talk about it. People making pretty sensitive videos or art pieces or just making edgy videos about real life still talk about it.

In fact, here are some of the ways Youtubers get around the way this topic is censored on the platform, which I have ranked from best to worst:

  1. Making sort of a pointing-gun-at-head motion with one’s fingers and pantomiming, while staring at the camera and pointing out the fact that you can’t say the word you mean – if it works for your delivery, it is a shockingly funny lampshade. Must be used sparingly.
  2. Taking their own life, ending themself, etc – Respectable but still grating if you pick up on the fact that they are avoiding the word “suicide”
  3. KYS and variations – Contaminated by somehow becoming an internet insult du jour but gains points for being directly short for the thing you want to say.
  4. Self-termination – Overly formal, not a thing anyone says.
  5. Unalived themselves – Unsalvageably goofy.
  6. Going down the sewer slide – Props for creativity; clear sign that we as a culture cannot be doing this.

So: I know people who have attempted suicide, and of the ones I have talked to about this phenomenon, they fucking hate it. Being like “hey, this huge alienating traumatic experience in your life is actually so bad that we literally cannot allow you to talk about it” tends to be even more alienating.

Some things are so big we have to talk about them. If we have to talk about them using the phrase “sewer slide”, I guess we will. But for christ’s sake, people are dying.

Survival tips

I’m reasonably online and I keep running into people who don’t know these. Maybe you’ll find something useful.

I was going to add in a whole thing about how “not all of this will apply to everyone,” but then I thought, why bother. Hey, rule one of taking advice from anyone or anything: sometimes it won’t apply to you! One day I will write the piece that applies to everyone, that enriches everyone’s life by providing them with perfectly new and relevant information. People will walk down the boulevards of the future thinking “hey, remember that one time we were all briefly united in a shining moment by the Ur-blog post that Georgia wrote a while ago.” It’s coming. Any day now. Watch this space.

USE MULTIPLE SEARCH ENGINES

Different web search engines are good at different things. This is surprisingly dynamic – I think a few years ago Bing was notably better at specific text (looking up specific quotes or phrases, in quotes – good for finding the sources of things).

I use DuckDuckGo day to day. For more complex queries or finding specific text, I switch to Google, and if I’m still not finding it, I’ll also check Bing. I have heard fantastic things about the subscription search engine Kagi – they have a user-focused rather than ad-focused search algorithm, and they let you natively do things like remove entire websites from your search results.

Marginalia is also a fantastic resource. It draws from more text-heavy sources and tends to find you older, weirder websites and blogs, at the expense of relevance.

There are other search engines for more specialized applications, e.g. Google Scholar for research papers.

If you ever use reverse image search to find the source of images: I check all of Google Images, Tineye, and Yandex before giving up. They all have somewhat different image banks.

USE FIREFOX AS YOUR BROWSER

Here’s a graph of the most common browsers over time.

According to statcounter, around 2012 Chrome became the most common browser, and in the past few years well over 50% of internet usage has come from Chrome.
Source: https://gs.statcounter.com

Chrome is a Google browser with Google’s tracking built into it, saving and sending information to Google as you hop around the web. Many of these features can be disabled, but also, the more people use exclusively Chrome, the more control Google can exert over the internet.

For instance, by majorly restricting what kinds of browser extensions people can create and use (the Manifest V3 changes), which is happening soon and is expected to nerf adblockers.

DO NOT GO GENTLE INTO THAT GOOD NIGHT. USE FIREFOX. HELL FUCKING YES.

Please stick it to the man and support a diverse internet ecosystem. Use Firefox. You can customize it in a million ways. It’s privacy focused. (Yes, privacy on the web is still achievable.) It’s run by a nonprofit. It’s really easy to use and works well. It’s for desktop and mobile. Use Firefox.

(I also have a Chrome-derived backup browser, Brave, on my PC for the odd website that is completely broken either by Firefox or by my many add-ons, when I don’t want to troubleshoot it – I don’t use it often! – or for when I want Google’s auto-translation tools, which are epic, and better than what I’ve found conveniently on Firefox. You can have two browsers. Nobody can stop you. But make one of them Firefox.)

READ BLOGS? GET AN RSS READER

I’ve heard from a few savvy people that they like the convenience of Substack blogs for keeping track of updates, and I was like – wait, don’t you have an RSS reader? Google didn’t have a monopoly on the RSS reader! The RSS reader lives on!

What it is: a lot of internet content published serially – blog posts, but other things too – has an RSS feed, which is a way of tagging the content so it can be fed into a program that will link to updates automatically. An RSS reader is a program that stores a list of RSS feeds; when you use it, it goes and checks for new additions to those feeds and brings them back to you. It’ll keep track of which ones you’ve clicked on already and not show them to you again.
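If it helps to see the moving parts, here’s a minimal sketch of what a reader does under the hood, in Python with only the standard library. The feed URL is a placeholder, and a real reader persists its seen-items list between runs and also handles Atom feeds, caching, and so on:

```python
# A minimal sketch of an RSS reader's core loop, standard library only.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/feed.xml"  # placeholder feed URL
SEEN = set()  # a real reader persists this between runs

def check_feed(url):
    """Fetch the feed and return (title, link) pairs we haven't seen yet."""
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    new_items = []
    # RSS 2.0 nests <item> elements under <channel>; iter() finds them anywhere.
    for item in tree.getroot().iter("item"):
        link = item.findtext("link")
        title = item.findtext("title", default="(untitled)")
        if link and link not in SEEN:
            SEEN.add(link)
            new_items.append((title, link))
    return new_items

for title, link in check_feed(FEED_URL):
    print(f"{title}\n  {link}")
```

Everything else a reader gives you – folders, read states synced across devices, nice typography – is convenience layered on top of that loop.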

This means you can keep track of many sources: Substacks, blogs on any other platform, podcasts, news outlets, webcomics, etc. Most good blogs are NOT on substack. That’s not a knock on substack, that’s just numbers. If substack is your only way of reading blogs you are missing out on vast swathes of the blogosphere.

I use Feedly, which has multi-device support, so I can have the same feed on both my phone and laptop.

If you want to run your own server for it, I hear good things about Tiny Tiny RSS.

There are a million more, and your options get wider if you only need to use it on one device. Look it up.

FIND SOME PEOPLE YOU TRUST.

If you find yourself looking up the same kinds of things a lot, look for experts, and go seek their opinion first.

This doesn’t have to be only for, like, hardcore research or current events. My role in my group house for the past several years has been “recluse who is pretty decent at home repairs”. Here is my secret: every time I run into a household problem I don’t immediately know how to solve, I aggressively look it up.

In this example, Wikihow is a great ally. Things like Better Homes and Gardens or Martha Stewart Living are also fairly well-known sources. If nothing else, I just try to look for something that was written by an expert and not a content mill or, god forbid, an LLM.

Sometimes your trusted source should be offline. There are definitely good recipe sites out there, but also if you really can’t stand the state of recipe search results, get a cookbook. I’m told experts write books on other subjects too. Investigate this. Report back to me.

PAY FOR THINGS YOU LIKE TO INCENTIVIZE THEIR EXISTENCE.

If you have the money for the creators and resources of your favorite tools or stories or what have you, it’ll help them stay around. Your support won’t always be enough to save a project you love from being too much work for its creator to keep up with. But it’s gonna fucking help.

Hey –


If you don’t like Substack but want to support the blog, I am still on Patreon. But I kind of like what Substack’s made happen, and also many cool cats have made their way to it.

That said, here are some minor beefs with Substack as a host:

  1. I want to be able to customize my blog visually. There are very few options for doing this. The existing layout isn’t bad, and I’m sure it was carefully designed. And this gripe may sound trivial. But this is my site, and I think we lose something by homogenizing ourselves in a medium (the internet) that is for looking at. If I want to tank my readership by putting an obnoxious repeating grid of jpeg lobsters as my background, that’s my god-given right.

    (I do actually have plans to swap my WordPress site over to a self-hosted, self-designed website. I just have to, like, get good enough with HTML and especially CSS to do Gwern’s nice sidenotes, and figure out hosting and comments. It’s gonna happen, though. Any day now.)
  2. I don’t like that I can only put other substack publications in the “recommendations” sideroll. It feels insular and social-network-y and a lot of my favorite publications aren’t on substack. I’ll recommend you a few the manual way now:

For your experience of Eukaryote Writes Blog, I think the major theoretical downside of this syndication is splitting the comments section. If someone sees the post on WordPress and leaves a comment there, a person reading on Substack won’t see it. What if there’s a good discussion somewhere?

But I already crosspost many of my posts to Lesswrong and usually if there’s any substantial conversation, it tends to happen there, not on the WordPress. Also sometimes my posts get posted on, like, Hacker News – which is awesome – and there are a bunch of comments there that I sometimes read when I happen to notice a post there but mostly I don’t. So this is just one more. I’ll see a comment for sure on LessWrong, Substack, or WordPress.

Anyway, glad to be here! Thanks for reading my stuff. Let me know if I get anything wrong. Download Firefox. On to more and better and stranger things.

Carl Sagan, nuking the moon, and not nuking the moon

In 1957, Nobel laureate microbiologist Joshua Lederberg and biostatistician J. B. S. Haldane sat down together and imagined what would happen if the USSR decided to explode a nuclear weapon on the moon.

The Cold War was on, Sputnik had recently been launched, and the 40th anniversary of the Bolshevik Revolution was coming up – a good time for an awe-inspiring political statement. Maybe they read a recent United Press article about the rumored USSR plans. Nuking the moon would make a powerful political statement on earth, but the radiation and disruption could permanently harm scientific research on the moon.

What Lederberg and Haldane did not know was that they were onto something – by the next year, the USSR really was investigating the possibility of dropping a nuke on the moon. They called it “Project E-4,” one of a series of possible lunar missions.

What Lederberg and Haldane definitely did not know was that that same year, 1958, the US would also study the idea of nuking the moon. They called it “Project A119” and the Air Force commissioned research on it from Leonard Reiffel, a regular military collaborator and physicist at the University of Illinois. He worked with several other scientists, including a University of Chicago grad student named Carl Sagan.

“Why would anyone think it was a good idea to nuke the moon?”

That’s a great question. Most of us go about our lives comforted by the thought “I would never drop a nuclear weapon on the moon.” The truth is that given a lot of power, a nuclear weapon, and a lot of extremely specific circumstances, we too might find ourselves thinking “I should nuke the moon.”

Reasons to nuke the moon

During the Cold War, dropping a nuclear weapon on the moon would show that you had the rocketry needed to aim a nuclear weapon precisely at long distances. It would show off your spacefaring capability. A visible show could reassure your own side and frighten your enemies.

It could do the same things for public opinion that putting a man on the moon ultimately did. But it’s easier and cheaper:

  • As of the dawn of ICBMs you already have long-distance rockets designed to hold nuclear weapons
  • Nuclear weapons do not require “breathable atmosphere” or “water”
  • You do not have to bring the nuclear weapon safely back from the moon.

There’s not a lot of English-language information online about the USSR’s E-4 program to nuke the moon. The main reason cited is wanting to prove that USSR rockets could hit the moon.3 The nuclear weapon attached wasn’t even the main point! The explosion would just be the convenient visual proof.

They probably had more reasons, or at least more nuance to that one reason – again, there’s not a lot of information accessible to me.* We have more information on the US plan, which was declassified in 1990, and probably some of the motivations for the US plan were also considered by the USSR for theirs.

  • Military
    • Scare USSR
    • Demonstrate nuclear deterrent1
      • Results would be educational for doing space warfare in the future2
  • Political
    • Reassure US people of US space capabilities (which were in doubt after the USSR launched Sputnik)
      • More specifically, that we have a nuclear deterrent1
    • “A demonstration of advanced technological capability”2
  • Scientific (they were going to send up batteries of instruments somewhat before the nuking, stationed at distances from the nuke site)
    • Determine thermal conductivity from measuring rate of cooling (post-nuking) (especially of below-dust moon material)
    • Understand moon seismology better via seismograph-type readings from various points at distance from the explosion
      • And especially get some sense of the physical properties of the core of the moon2
MANY PROBLEMS, ONE SOLUTION: BLOW UP THE MOON
As stated by this now-unavailable A Softer World merch shirt design. Hey, Joey Comeau and Emily Horne, if you read this, bring back this t-shirt! I will buy it.

Reasons to not nuke the moon

In the USSR, Aleksandr Zheleznyakov, a Russian rocket engineer, explained some reasons the USSR did not go forward with their project:

  • Nuke might miss the moon
    • and fall back to earth, where it would detonate, because of the planned design which would explode upon impact
      • in the USSR
      • in the non-USSR (causing international incident)
    • and circle sadly around the sun forever
  • You would have to tell foreign observatories to watch the moon at a specific time and place
    • And… they didn’t know how to diplomatically do that? Or how to contact them?

There’s less information on the US side about reasons not to. While the US was not necessarily using the same sea-mine-style detonation system that the planned USSR moon-nuke would have3, they were still concerned that a failed launch would result in not just a loose rocket but a loose nuclear weapon crashing to earth.2

(I mean, not that that’s never happened before.)

Even in the commissioned report exploring the feasibility, Leonard Reiffel and his team clearly did not want to nuke the moon. They outline several reasons this would be bad news for science:

  • Environmental disturbances
  • Permanently disrupting possible organisms and ecosystems
    • In maybe the strongest language in the piece, they describe this as “an unparalleled scientific disaster”
  • Radiological contamination
    • There are some interesting things to be done with detecting subtle moon radiation – effects of cosmic rays hitting it, detecting a magnetosphere, various things like the age of the moon. Nuking the moon would easily spread radiation all over it. It wouldn’t ruin our ability to study this, especially if we had some baseline instrument readings up there first, but it wouldn’t help either.
  • To achieve the scientific objective of understanding moon seismology, we could also just put detectors on the moon and wait. If we needed more force, we could just hit the moon with rockets, or wait for meteor impacts.

I would also like to posit that nuking the moon is kind of an “are we the baddies?” moment, and maybe someone realized that somewhere in there.

Please don't do that :(

Afterwards

That afternoon when they imagined the USSR nuking the moon, Lederberg and Haldane ran the numbers and guessed that a nuclear explosion on the moon would be visible from earth. So the USSR’s incentive was there, and the plan seemed politically feasible; they couldn’t do much about that. And it frightened them, because such contamination would disrupt and scatter debris all over the unexplored surface of the moon – the closest and richest site for space research, a whole mini-planet of celestial material that had not passed through the destructive gauntlet of earth’s atmosphere (as meteors do, the force of reentry blasting away temperature-sensitive and delicate structures).

Lederberg couldn’t stop the USSR from nuking the moon. But early in the space age, he began lobbying against contaminating outer space. He pushed for a research-based approach and international cooperation, back when cooperating with the USSR was not generally on the table. His interest and scientific clout led colleagues to take this seriously. We still do this – we still sanitize outgoing spacecraft so that hardy Earth organisms will (hopefully) not colonize other planets.

A rocket taking earth organisms into outer space is forward contamination.

Lederberg then took this a few steps further and realized that if there was a chance Earth organisms could disrupt or colonize Moon life, there was a smaller but deadlier chance that Moon organisms could disrupt or colonize Earth life.

A rocket carrying alien organisms from other planets to earth is back contamination.

He realized that in returning space material to earth, we should proceed very, very cautiously until we can prove that it is lifeless. His efforts were instrumental in causing the Apollo program to have an extensive biosecurity and contamination-reduction program. That program is its own absolutely fascinating story.

Early on, a promising young astrophysicist joined Lederberg in A) pioneering the field of astrobiology and B) raising awareness of space contamination – former A119 contributor and future space advocate Carl Sagan.

Here’s what I think happened: a PhD student fascinated with space works on a secret project, with his PhD advisor, on nuking the moon. He assists with this work, finds it plausible, and is horrified for the future of space research. Stumbling out of this secret program, he learns about a renowned scientist (Joshua Lederberg) calling loudly for care about space contamination.

Sagan perhaps learns, upon further interactions, that Lederberg came to this fear after considering the idea that our enemies would detonate a nuclear bomb on the moon as a political show.

Why, yes, Sagan thinks. What if someone were foolish enough to detonate a nuclear bomb on the moon? What absolute madmen would do that? Imagine that. Well, it would be terrible for space research. Let’s try and stop anybody from ever doing that.

A panel from Homestuck of Dave blasting off into space on a jetpack, with Carl Sagan's face imposed over it. Captioned "THIS IS STUPID"
Artist’s rendition. || Apologies to, inexplicably, both Homestuck and Carl Sagan.

And if it helps: he made it! Over fifty years later, nobody thinks about nuking the moon very often anymore. Good job, Sagan.

This is just speculation. But I think it’s plausible.

If you like my work and want to help me out, consider checking out my Patreon! Thanks.

References

* We have, like, the personal website of a USSR rocket scientist – reference 3 below – which is pretty good.

But then we also have an interview that might have been done by journalist Adam Tanner with Russian rocket scientist Boris Chertok, published by Reuters in 1999. I found this on an archived page from the Independent Online, a paper that syndicated with Reuters, where it was uploaded in 2012. I emailed Reuters and they did not have the interview in their archives, but they did have a photograph taken of Chertok from that day, so I’m wondering if they published the article but simply didn’t properly archive it later, and if the Independent Online is the syndicated publication that digitized this piece. (And then later deleted it, since only the Internet Archive copy exists now.) I sent a message to who I believe is the same Adam Tanner who would have done this interview, but haven’t gotten a response. If you have any way of verifying this piece, please reach out.

1. Associated Press, as found in the LA Times archive, “U.S. Weighed A-Blast on Moon in 1950s.” 2000 May 18. https://www.latimes.com/archives/la-xpm-2000-may-18-mn-31395-story.html

2. Project A119, “A Study of Lunar Research Flights”, 1959 June 15. Declassified report: https://archive.org/details/DTIC_AD0425380

This is an extraordinary piece to read. I don’t think I’ve ever read a report where a scientist so earnestly explores a proposal and tries to solve various technical questions around it, and clearly does not want the proposal to go forward. For instance:

It is not certain how much seismic energy will be coupled into the moon by an explosion near its surface, hence one may develop an argument that a large explosion would help ensure success of a first seismic experiment. On the other hand, if one wished to proceed at a more leisurely pace, seismographs could be emplaced upon the moon and the nature of possible interferences determined before selection of the explosive device. Such a course would appear to be the obvious one to pursue from a purely scientific viewpoint.

3. Aleksandr Zheleznyakov, translated by Sven Grahn, updated 1999 or so. “The E-4 project – exploding a nuclear bomb on the Moon.” http://www.svengrahn.pp.se/histind/E3/E3orig.htm

Crossposted to LessWrong.

Internet Harvest (2024, 1)

Internet Harvest is a selection of the most succulent links on the internet that I’ve recently plucked from its fruitful boughs. Feel free to discuss the links in the comments.

Biosecurity

US COVID and flu website + hotline for getting prescribed Paxlovid, for free, for anyone with a positive COVID test and risk factors.

Register now to access free virtual care and treatment for COVID-19 and Flu, 24 hours a day, 7 days a week. Sign up anytime, whether you are sick or not.

https://www.test2treat.org/

I unfortunately had cause to use this recently, and I was struck by how easy it was – as well as the fact that I did not have to talk to anyone via phone or video call.

(It was an option, and they indicated at a couple of points that a medical professional might call me if they had questions, so you should be prepared, but in my case they didn’t.) The whole thing, including getting the prescription, was handled digitally over text. This is fantastic.

First fatal case of alaskapox, a novel orthopox virus, in an immunocompromised patient. Orthopox is the virus group that includes smallpox, monkeypox, and cowpox. Alaskapox was discovered in 2015 and seems to be spread by rodents. There have been seven total human cases so far.

The University of Minnesota Center for Infectious Disease Research and Policy (CIDRAP) has opened a Chronic Wasting Disease (CWD) Contingency Planning Project – a group of experts that are planning for the possibility of CWD spilling over into people.

I think this is a deeply important kind of project to be doing. You see this in some places in some fashion – for instance, there’s a lot of effort and money spent understanding and tracking and controlling avian influenza (a strain which has a high mortality in humans but isn’t infectious between humans, just birds – for now.) But often, this kind of proactive pandemic prevention work isn’t done, even when the evidence is there.

(I wrote about the possibility of chronic wasting disease spilling into humans a few months ago. I ended up supposing that it was possible, but looking at the infection risk posed by another spilled-over prion disease from a more common animal, BSE, it seems like the absolute risk from prion diseases that can infect humans is extremely low. I don’t think people on this project would necessarily disagree with that, but there are a lot of unknowns and plenty of reasons to take even a low risk of a highly lethal disease spillover very cautiously. Still, I’ll have to read up on it, there may well be a higher risk than I assumed.)

Relatedly, the first noticed cases of Alzheimer’s disease transmitted between people (in patients injected with human-derived human growth hormone, manifesting decades later). In a past prion post, I wrote: “Meanwhile, Alzheimer’s disease might be slightly infectious – if you take brain extracts from people who died of Alzheimer’s, and inject them into monkeys’ brains, the monkeys develop spongy brain tissue that suggests that the prions are replicating. This technically suggests that the Alzheimer’s amyloids are infectious, even if that would never happen in nature.” Well, it didn’t happen naturally, but I guess it did happen. (h/t Scott at Astral Codex Ten.)

The design history of the biohazard symbol.

Other biology

“Obelisks” are potentially a completely new kind of tiny microorganism, identified from metagenomic RNA sequencing.

One of the best stories in a scientific paper is “Ants trapped for years in an old bunker; survival by cannibalism and eventual escape” by Rutkowski et al, 2019.

First of all: the discovery of ants falling into a pipe that led to a sealed bunker in Poland. Once inside, the ants couldn’t climb back out. There were no plants or other life in the bunker, so the ants survived on other organisms that fell into the bunker, including eating their own dead (which they wouldn’t normally do, but if you’re in a tight spot, like an unused former nuclear weapons bunker, calories is calories).

Second: after studying how they lived, the scientists tried transporting a small group of Bunker Ants to the surface to make sure they wouldn’t immediately behave in some kind of abnormal destructive way toward surface ants.

Then, when they didn’t, the scientists – in what I see as a breathtaking act of compassion – installed a plank into the pipe, so that the Bunker Ants could climb out of the bunker and be on the surface again.

Flash photo taken in a small grotty bunker room. In the middle are two new planks nailed together to make a bridge extending from the dirty floor of the bunker to a hole in the ceiling.
God shows up and apologizes for not noticing us sooner and says she’ll have the angels install one of these in the sky. [Image: Rutkowski et al 2019]

NEW DEEP SEA ANIMALS LOCATED. Listen. I’ve written about this before – I know so much about the weird little women of the deep ocean and still every time I learn some more there are NEW STRANGER WOMEN DOWN THERE. This is also true of all of humanity learning about the deep ocean I guess. You simply cannot have a beat on this place.

Other bad things

A Hong Kong finance worker joined a multi-person video call where all his colleagues’ videos and their voices were deepfakes. It was a scam and the worker was tricked into transferring them millions of dollars. So that’s a thing that can happen now!

Bellingcat’s investigation into a tugboat spilling oil off the coast of Tobago. (The first piece is linked, there have been more updates since.) I love Bellingcat; I’ve talked about this before. My reaction to keeping up with this series is equal parts “those wizards have done it again” and “there be some specific-ass websites in this world.”

…But when there’s not detailed public-facing information about something, you can make your own, as shown by volunteers tracking ICE deportations by setting up CCTV facing Boeing Field in Seattle and showing up weekly to watch the feed and count how many chained detainees are boarded onto planes. This is laudable dedication.

Do you have an off-brand video doorbell? Get rid of it! They’re incredibly insecure. They aren’t even encrypted. If you have an on-brand video doorbell, maybe still get rid of it, or at least switch it to using local storage. At least Ring has been ending the program where they made it easy for police to get footage without warrants – but other brands might have different systems, and if you ask me it’s pretty bad that they had that in the first place. (H/t Schneier on Security)

You know who is giving police sensitive customer personal information without warrants? Pharmacies! Yikes! (Again, h/t Schneier on Security)

Other interesting things

Mohists as early effective altruists? Ozy at Thing of Things writes about the Mohist philosophy of ancient China. I knew a little bit about this, but learning more it’s even cooler than I thought, and the parallels to modern rationality are surprising.

MyHouse.WAD is a fan-created map for the video game “Doom” that promises to be a map of a childhood home and turns into an evocative horror experience. I don’t know anything about Doom, so I’ve only experienced it in the form of youtube videos about the (real! playable!) map. You also don’t need to know anything about Doom to appreciate it. Power Pak’s video “MyHouse.WAD – Inside Doom’s Most Terrifying Mod” is the most popular video and for good reason.

If you liked that, you may also enjoy Spazmatic Banana’s “doom nerd blindly experiences myhouse.wad (and loses his mind)” (exactly what it sounds like) as well as DavidXNewton’s “The Machinations of myhouse.wad (How it works)” series (which, as it sounds like, explains how the map works – again, well explained if you do not know a thing about Doom modding).

The earliest ARG began in the 1980s at the very beginning of the internet age and is based around a supposed research project that made a dimensional rift in the (real) ghost town of Ong’s Hat, New Jersey.

The world’s largest terrestrial vehicle is the Bagger 293, an otherworldly-looking machine that scrapes up earth to dig open-pit mines.

A colossal bucket-wheel excavator device. It looks kind of like a big shipping crane with a circular sawblade made of excavator buckets all frankensteined together. Some tiny people indicate the scale.
There’s debate to be had on the virtues or lack thereof of open pit mining, but I think we can all agree: they made a really big machine about it.

A cool piece on the woman who won the “Red Lantern” award for coming in last in the 2022 Iditarod. (H/t Briar.)

Hilariously, she wrote on Twitter:

A short story: The Mother of All Squid Builds a Library. (H/t Ozy.)

Kelsey Piper’s piece on regulations and why it’s good that the FAA lets parents on airplanes carry babies in their lap, even though this is known to be less safe in the event of plane accidents than requiring babies to have their own seats.

A 1500s illustration of three Aztec people with fancy food dishes in front of them.

Book review: Cuisine and Empire

[Header: Illustration of meal in 1500s Mexico from the Florentine Codex.]

People began cooking our food maybe two million years ago and have not stopped since. Cooking is almost a cultural universal. Bits of raw fruit or leaves or flesh are a rare occasional treat or garnish – we prefer our meals transformed. There are other millennia-old procedures we use to turn raw ingredients into food: separating parts, drying, soaking, slicing, grinding, freezing, fermenting. We do all of this for good reason: cooking makes food more calorically efficient and less dangerous. Other techniques contribute to this, or help preserve food over time. Also, it tastes good.

Cuisine and Empire by Rachel Laudan is an overview of human history by major cuisines – the kind of things people cooked and ate. It is not trying to be a history of cultures, agriculture, or nutrition, although it touches on all of these things incidentally, as well as some histories of things you might not expect, like identity and technology and philosophy.

Grains (plant seeds) and roots were the staples of most cuisines. They’re relatively calorically dense, storeable, and grow within a season.

  • Remote islands really had to make do with whatever early colonists brought with them. Not only did pre-Columbian Hawaii not have metal, they didn’t have clay to make pots with! They cooked stuff in pits.

Running in the background throughout a lot of this is the clock of domestication – with enough time and enough breeding you can make some really naturally-digestible varieties out of something you’d initially have to process to within an inch of its life. It takes time, quantity, and ideally knowledge and the ability to experiment with different strains to get better breeds.

Potatoes came out of the Andes and were eaten alongside quinoa. Early potato cuisines didn’t seem to eat a lot of whole or cut-up potatoes – they processed the shit out of them: chopping, drying or freeze-drying them, soaking them, reconstituting them. They had to do a lot of this because the potatoes weren’t as consumer-friendly as modern breeds – less digestible composition, more phytotoxins, etc.

As cities and societies caught on, so did wealth. Wealthy people all around the world started making “high cuisines” of highly-processed, calorically dense, tasty, rare, and fancifully prepared ingredients. Meat and oil and sweeteners and spices and alcohol and sauces. Palace cooks came together and developed elaborate philosophical and nutritional theories to declare what was good to eat.

Things people nigh-universally like to eat:

  • Salt
  • Fat
  • Sugar
  • Starch
  • Sauces
  • Finely-ground or processed things
  • A variety of flavors, textures, options, etc
  • Meat
  • Drugs
    • Alcohol
    • Stimulants (chocolate, caffeine, tea, etc)
  • Things they believe are healthy
  • Things they believe are high-class
  • Pure or uncontaminated things (both morally and from, like, lead)

All people like these things, and low cuisines were not devoid of joy, but these properties showed up way more in high cuisines than low cuisines. Low cuisines tended to be a lot of grain or tubers and bits of whatever cooked or pickled vegetables or meat (often wild-caught, like fish or game) could be scrounged up.

In the classic way that oppressive social structures become self-reinforcing, rich people generally thought that rich people were better-off eating this kind of diet – carefully balanced – whereas it wasn’t just necessary, it was good for the poor to eat meager, boring foods. They were physically built for that. Eating a wealthy diet would harm them.

In lots of early civilizations, food and sacrifice of food was an important part of religion. Gods were attracted by offered meals or meat and good smells, and blessed harvests. There were gods of bread and corn and rice.

One thing I appreciate about this book is that it doesn’t just care about the intricate high cuisines, even if they were doing the most cooking, the most philosophizing about cooking, and the most recordkeeping. Laudan does her best to pay at least as much attention to what the 90+% of regular people were eating all of the time.


Here’s a great passage on feasts in Ancient Greece, at the Temple of Zeus in Olympia, at the start of each Olympic games (~400 BCE):

On the altar, ash from years of sacrifice, held together with water from the nearby River Alpheus, towered twenty feet into the air. One by one, a hundred oxen, draped with garlands, raised especially for the event and without marks of the plow, were led to the altar. The priest washed his hands in clear water in special metal vessels, poured out libations of wine, and sprinkled the animals with cold water or with grain to make them shake their heads as if consenting to their death. The onlookers raised their right arms to the altar. Then the priest stunned the lead ox with a blow to the base of the neck, thrust in the knife, and let the blood spill into a bowl held by a second priest. The killing would have gone on all day, even if each act took only five minutes.

Assistants dragged each felled ox to one side to be skinned and butchered. For the assembled crowd, cooks began grilling strips of beef, boiling bones in cauldrons, baking barley bannocks, and stacking up amphorae of wine. For the sacrifice, fat and leg and thigh bones rich in life-giving marrow were thrown on a fire of fragrant poplar branches, and the entrails were grilled. Symbolizing union, two or three priests bit together into each length of intestines. The bones whitened and crumbled; the fragrant smoke rose to the god.

Ancient Greek farmers had thin soil and couldn’t do much in the way of deliberate irrigation, so their food supply was more unpredictable than other places.

Country people kept a three-year supply of grain to protect against harvest failure and a four-year supply of oil. 

That’s so much!

That poor soil is also why the olive tree was relied on for oil instead of grains, which had better yields and took way less time to reach producing age. You could grow olive trees in places you couldn’t farm grain. And now we all know and love the oil from this tree. A tree is a wild place to get oil from! Similar story for grapevines.

  • The Spartans really liked this specific pork and blood soup called “black broth”.

This book was a fun read, on top of the cool history. Laudan has a straightforward listful way of describing cuisines that really puts me in the mind of a Redwall or a George R. R. Martin feast description.

A royal meal in the Indian Mauryan Empire (circa 300 BCE or so):

For court meals, the meat was tempered with spices and condiments to correct its hot, dry nature and accompanied by the sauces of high cuisine. Buffalo calf spit-roasted over charcoal and basted with ghee was served with sour tamarind and pomegranate sauces. Haunch of venison was simmered with sour mango and pungent and aromatic spices. Buffalo calf steaks were fried in ghee and seasoned with sour fruit, rock salt, and fragrant leaves. Meat was ground, formed into patties, balls, or sausage shapes, and fried, or it was sliced, dried to jerky, and then toasted.

Or in around 600 CE, Mexican Teotihuacan eating:

To maize tamales or tortillas were added stews of domestic turkeys and dogs, and deer, rabbits, ducks, small birds, iguanas, fish, frog, and insects caught in the wild. Sauces were made with basalt pestles and mortars that were used to shear fresh green or dried and rehydrated red chiles, resulting in a vegetable puree that was thickened with tomatillos (Physalis philadelphica) or squash seeds. Beans, simply simmered in water, provided a tasty side dish. For the nobles, there were gourd bowls of foaming chocolate, seasoned with annatto and chili.

I’m a vegetarian who has no palate for spice and now all I can think about is eating dog stew made with sheared fresh green chiles and plain beans.

Be careful about reading this book while broke on an airplane. You will try to convince yourself this is all academic and that you’re not that curious about what iguana meat tastes like. You’ll lose that internal battle. Then, in desperation, your brain will start in on a new phase. You’ll tell yourself, as you scrape the last of your bag of traveler’s food – walnut meat, dried grapes, and pieces of sweet chocolate – that you await a complimentary snack of baked wheat crackers flavored with salt, and a cup of hot coffee with cow’s milk, sweetened with cane sugar, and that all of this is happening while you are flying. In this moment, you will be enlightened.


Grindstones are very important throughout history. A lot of cultures used hand grindstones at first and worked up to water- or animal-driven mills later. You grind grain to get flour, but you also grind things to get oil, spices, a different consistency of root, etc. People spent a lot of time grinding grain. There are a million kinds of hand grindstone. Some are still used today. When Roman soldiers marched around continents, they brought with them a relatively efficient rotary grindstone – mules carried one 60-pound grindstone per 8 soldiers. Every day, a soldier would grind for an hour and a half to feed those eight people. The grain would be stolen from storehouses conquered along the way.


Chapter 3, on Buddhist cuisines throughout Asia, was especially great. Buddhism spread as sort of a reaction to the high sacrificial meat-n-grain cuisine of the time – a religious asceticism that really caught on. Ashoka spread it in India around 250 BCE, and over centuries it slowly seeped into China. Buddhists did not kill animals (mostly) nor drink alcohol, and ate a lot of rice. White rice, sugar, and dairy spread through Asia. In both China and India, as the rich got into it, Buddhism became its own new high cuisine: rare vegetables, sugar, ghee and other dairy, tea, and elaborate vegetarian dishes. So much for asceticism!

There is an extensive history of East Asian tofu and gluten-based meat substitutes that largely came out of vegetarian Buddhist influence. A couple 1100s and 1200s CE Chinese cookbooks are purely vegetarian and have recipes for things like mock lung (you know, like a mock hamburger or mock chicken, but if you’re missing the taste of lung.) (You might be interested in modern adaptations from Robban Toleno.)

Diets often go with religion. It’s a classic way to divide culture, and also, food and philosophy and ideas about health have always gone hand in hand in hand. Islamic empires spread cuisine over the middle east. Christian empires brought their own food with them to other parts of the world.

A lot of early cuisines in Europe, the Middle East, India, Asia, and Mesoamerica were based on correspondences between types of food and elements and metaphysical ideas. You would try to reach balance. In Europe in the 1500s and 1600s, these old incorrect ideas about nutrition were replaced with bold new incorrect ideas about nutrition. Instead of corresponding to four elements, food was actually made of three chemical elements: salt, oil, and vapor. This came from the Swiss visionary Paracelsus, who thought chemistry could be based on the Bible, and who was a century later called a “master at murdering folk with chemistry”.

Fermenting took on its own magic:

Paracelsus suggested that “ferment” was spiritual, reinterpreting the links between the divine and bread in terms of his Protestant chemistry. When ferment combined with matter (massa in Latin, significantly also the word for bread dough), it multiplied. If this seems abstract, consider what happened in bread making. Bakers used a ferment or leaven[…] and kneaded it with flour and water. A few hours later, the risen dough was full of bubbles, or spirit. Ferment, close to the soul itself, turned lifeless stuff into vibrant, living bodies filled with spirit. The supreme example of ferment was Christ, described by the chemical physicians as fermentum, “the food of the soul.”

Again, cannot stress enough that the details of this food cosmology still got most things wrong. But I think they weren’t far off with this one.

There was an article I had bookmarked years ago about the very early days of microbiology and how many people interpreted this idea of tiny animalcules found in sexual fluid and sperm as literal demons. Does anyone know about this? I feel like these dovetail very nicely in a history of microbiological theology.


Corn really caught on in the 1800s as a food for the poor in East and Central Africa, Italy, Japan, India, and China. I don’t really know how this happened. I assume it grew better in some climates than native grains, like potatoes did in Europe?

Corn cuisine in the Americas knew to treat the corn with lye to release more of its nutrients, kill toxins, and make it taste better. This is called nixtamalization. When corn spread to Eurasia, it was grown widely, but nixtamalization didn’t make it over. The Eurasian eaters had to get those nutrients from elsewhere. They still ate corn, but it was a worse time!

  • In Iceland, where no crops would grow, people would use dried fish called “stockfish” and spread sheep butter on it and eat it instead of bread.

Caloric efficiency was a fun recurring theme. See again, the slow adoption of the potato into Europe. Cuisine has never been about maximizing efficiency. Once bare survival is assured, people want to eat what they know and what has high status in their minds.

I think this is a statement about the feedback cycles of individual people – for instance, subsistence farmers. Suppose you’re a Polish peasant in 1700 and you struggle by, year by year, growing wheat and rye. But this year you have access to potatoes, a food you somewhat mistrust. You might trust it enough to eat a cooked potato handed to you if you were starving – but when you make decisions about what to plant for a year, you will be reluctant to commit yourself and your family to a diet of a possibly-poisonous food (or to a failed crop – you don’t know how to grow potatoes, either). Even if it’s looking like a dry year – especially if it’s looking like a dry year! – you know wheat and rye. You trust wheat and rye. You’ve made it through a lean year of wheat and rye before. You’ll do it again.

People are reluctant to give up their staple crops, but they will supplant them. Barley was solidly replaced by the somewhat-more-efficient wheat throughout Europe, millet by rice and wheat in China. But we settled on the ones we like:

The staples that humans had picked out centuries before 1000 B.C.E. still provide most of the world’s human food calories. Only sugarcane, in the form of sugar, was to join them as a major food source.

Around 1650 in Europe, protestant-derived French cuisine overtook high Catholic cuisine as the main food of the European aristocracy.

Catholic cuisine:

  • Roasts
  • Fancy pies
  • Pottage
  • Cold foods are bad for you
  • Fasting dishes
  • Lard
  • Pastry

French cuisine:

  • Fancy sauces
  • Bouillons and extracts
  • Raw salads
  • Focus on vegetables
  • Butter

Coming up in more recent times, say the 1700s, was a very slow equalizing in society:

As more nations followed the Dutch and British in locating the source of rulers’ legitimacy not in hereditary or divine rights but in some form of consent or expression of the will of the people, it became increasingly difficult to deny to all citizens the right to eat the same kind of food.

After the French revolution, high French cuisine was almost canceled in France. Everyone should eat as equals, even if the food was potatoes! Fortunately– unfortunately– as it happened, Napoleon came in after not too long and imperial high cuisine was back on a very small number of menus.

Speaking of potatoes and self-governance:

The only place where potatoes were adopted with enthusiasm was in distant [from Europe] New Zealand. The Maoris, accustomed to the subtropical roots that they had introduced to the North Island, welcomed them when introduced by Europeans in the 1770s because they grew in the colder South Island. Trading potatoes for muskets with European whalers and sealers enabled the Maoris to resist the British army from the 1840s to the 1870s.

Meanwhile, in Europe: hey, we’re back to meat and grain! Britain really prided itself on beef and attributed the strength of its empire to beef. Even colonized peoples were like “whoa, maybe that beef and bread they’re eating really is making them that strong, we should try that.” Here’s a 1900 ad for beef extract that aged poorly:

[Source of this version. The brand of beef extract is spelled out of British colonies.]

That said, I did enjoy Laudan’s defense of British food. Starting in 1800, the British Empire was well underway, and what we now think of as stereotypical British cuisine was developing. It was heavy in sugar and sweets, white bread, beef, and prepared food. During the early industrial revolution, food and nutrition and the standard of living went down, but by the 1850s, all of it really came back.

It is worth noting that few cuisines have been so roundly condemned as nutritional and gastronomical disasters as British cuisine.

But Laudan points out that this food was not the aristocrat food (they were still eating French cuisine). It was the food of the working city poor. This is the rise of the “middling cuisines”, a true middle ground between the fancy high cuisine of a truly tiny percent of society and the humble cuisine of peasants who often faced starvation. For once, they had enough to eat. This was new.

After discussing the various ways in which the diet may have been bland or unappealing compared to neighboring cuisines –

Nonetheless, from the perspective of the urban salaried and working classes, the cuisine was just what they had wished for over the centuries: white bread, white sugar, meat, and tea. A century earlier, not only were these luxuries for much of the British population, but the humble were being encouraged to depend on potatoes, not bread, a real comedown in a society in which some kind of bread, albeit a coarse one, had been central to well-being for centuries. Now all could enjoy foodstuffs that had been the privilege of the aristocracy just a few generations earlier. Indeed, the meal called tea came close to being a true national cuisine. Even though tea retained traces of class distinctions, with snobberies about how teacups should be held, or whether milk or tea should be put into the cup first, everyone in the country, from the royal family, who were painted taking tea, to the family of a textile worker in the industrial north of the country, could sit down to white bread sandwiches or toast, jam, small cakes, and an iced sponge cake as a centerpiece. They could afford the tea that accompanied the meal. Set out on the table, tea echoed the grand buffets of eighteenth-century French high cuisine. [...] What seemed like culinary decline to those Britons who had always dined on high or bourgeois cuisine was a vast improvement to those enjoying those ampler and more varied cuisines for the first time.

[...]

Although to this day food continues to be used to reinforce minor differences in status, the hierarchical culinary philosophy of ancient and traditional cuisines was giving way to the more egalitarian culinary philosophy of modern cuisines.

A lot of this was facilitated by imperialism and/or outright slavery. The tea itself, for instance. But Britain was also deeply industrialized. Increased crop productivity, urbanization, and industrial processing were also making Britain’s home-grown food – wheat, meat – cheaper, or bringing those processes home. At the start of this period, sugar had been grown and harvested by slaves to feed Europe’s appetites, but around 1800, Prussian inventors figured out how to make sugar at scale from beets.

The work was done by men paid salaries or wages, not by slaves or indentured laborers. The sugar was produced in northern Europe, not in tropical colonies. And the price was one all Europeans could afford. 

This was the sugar the British were eating then. Industrialization offered factory production of foods, canning, wildly cheap salt, and refrigeration.

We’re reaching the modern age, where the empires have shrunk and most people get enough calories and have access to industrially-cheap food and the fruits of global trade. Laudan discusses at length the hamburger and instant ramen – wheat flour, fat, meat or meat flavor, low price, and convenience. New theories of nutrition developed and we definitely got them right this time. The empires break up and worldwide leaders take pride in local cuisines, manufacturing a sense of identity through food if needed. Most people have the option of some dietary diversity and a middling cuisine. Go back to that list of things people like to eat. Most of us have that now! Nice!

  • Nigeria is the biggest importer of Norwegian stockfish. It caught on as a relief food delivered during Nigeria’s Biafran civil war. Here’s a 1960s photo of a Nigerian guy posing in a Bergen stockfish warehouse.

Aw, wait, is this a book review? Book review: Great stuff. There’s a lot of fascinating stuff not included in this summary. I wish it had more on Africa but I did like all the stuff about Eurasia that was in there. I feel like there are a few cultures with really really meat heavy cuisines – like Saami or Inuit cuisine – that could have been at least touched on. But also those aren’t like major cuisines and I can just learn about those on my own. Overall I appreciated the unwavering sense of compassion and evenhandedness – discussing cuisines and falsified theories of nutrition without casting judgment. Everyone’s just trying to eat dinner.

Rachel Laudan also has a blog. It looks really cool.

Cuisine and Empire by Rachel Laudan

The book is “Cuisine and Empire” by Rachel Laudan, 2012. h/t my friend A for the recommendation.


More food history from Eukaryote Writes Blog: Triptych in Global Agriculture.

If you want to support my work by chucking me a few bucks per post, check out my Patreon!

Defending against hypothetical moon life during Apollo 11

[Header image: Photo of the lunar lander taken during Apollo 11.]

In 1969, after successfully bringing men back from landing on the moon, the astronauts, spacecraft, and all the samples from the lunar surface were quarantined for 21 days. This was to account for the possibility that they were carrying hostile moon germs. Once the quarantine was up, the astronauts were not sick, and extensive biological testing on them and the samples had shown no signs of infection or unexpected life, they were released.

We know now that the moon is sterile. We didn’t always know this. It was one of the things we hoped to learn from the Apollo 11 program – the first time not only that people would visit another celestial body, but that material from another celestial body would be brought back to Earth in relatively pristine condition. The possibilities were huge.

The possibilities included life, although nobody thought this was especially likely. But in that slim chance of life, there was a chance that the life would be harmful to humans or the earth environment. Human history is full of organisms wreaking havoc when introduced to a new location – smallpox in the Americas, rats on Pacific islands, water hyacinth outside of South America. What if there were microbes on the moon? Even if there was a tiny chance, wouldn’t it be worth taking careful measures to avoid the risk of an unknown and irreversible change to the biosphere?

NASA, Congress, and various other federal agencies were apparently convinced to spend millions of dollars building an extensive new facility and taking other extensive measures to address this possibility.

This is how a completely abstract argument about alien germs was taken seriously and mitigated at great effort and expense during the 1969 Apollo landing.


Will the growing deer prion epidemic spread to humans? Why not?

Helpful background reading: What’s the deal with prions?

A novel lethal infectious neurological disease emerged in American deer a few decades ago. Since then, it’s spread rapidly across the continent. In areas where the disease is found, it can be very common among the local deer.

Map from the Cornell Wildlife Health Lab.

Chronic wasting disease isn’t caused by a bacterium, virus, protist, or worm – it’s caused by a prion, a little misshapen version of a protein that occurs naturally in the nervous systems of deer.

Chemically, the prion is made of exactly the same stuff as its regular counterpart – it’s a string of the same amino acids in the same order, just shaped a little differently. Both the prion and the regular version (PrP) are monomers, single units that naturally stack on top of copies of themselves or very similar proteins. The prion’s trick is that as other PrP stacks atop it, the prion reshapes it – just a little – so that it becomes a prion too. These chains of prions are quite stable, and, over time, they form long, persistent clusters in the tissue of their victims.

We know of only a few prion diseases in humans. They’re caused by random chance misfolds, a genetic predisposition for PrP to misfold into a prion, accidental cross-contamination via medical supplies, or, rarely, the consumption of prion-infected meat. Every known animal prion is a misfold of the same specific protein, PrP. PrP is expressed in the nervous system, particularly in the brain – so infections cause neurological symptoms and physical changes to the structure of the brain. Prion diseases are slow to develop (up to decades), incurable, and always fatal.

There are two known infectious prion diseases in people. One is kuru, which caused an epidemic among tribes who practiced funerary cannibalism in Papua New Guinea. The other is mad cow disease – bovine spongiform encephalopathy (BSE) in cows, variant Creutzfeldt-Jakob disease (vCJD) in humans – which was first seen in humans in 1996 in the UK, and comes from cows.

Chronic wasting disease (CWD)…

  • Is, like every other animal prion disease, a misfold of PrP. PrP is quite similar in both humans and deer.
  • Is found in multiple deer species which are commonly eaten by humans.
  • Can be carried in deer asymptomatically.

But it doesn’t seem to infect people. Is it ever going to? If a newly emerged virus that spread through consuming infected meat were sweeping across the US and killing deer, I would think “oh NO.” I’d need to see very good evidence to stop sounding the alarm.

Now, the fact that it’s been a few decades, and it hasn’t spread to humans yet, is definitely some kind of evidence about safety. But are we humans basically safe from it, or are we living on borrowed time? If you live in an area where CWD has been detected, should you eat the deer?

Sidenote: Usually, you’ll see “BSE” used for the disease in cows and “vCJD” for the disease in humans. But they’re caused by the same agent, and this essay is operating under a zoonotic One Health kind of stance, so I’m just calling the disease BSE here. (As well as the prion that causes it, when I can get away with it.)

In short

The current version of CWD is not infectious to people. We checked. BSE showed that prions can spill over, and there’s no reason a new CWD variant couldn’t someday do the same. The more cases there are, the more likely a spillover becomes. That said, BSE did not spill over very effectively: it was always incredibly rare in humans. It’s an awful disease to get, but the chance of getting it is tiny. Prions in general have a harder time spilling over between species than viruses do. CWD might behave somewhat differently, but it will probably stay hampered by the species barrier.

Why do I think all of this? Keep reading.

North American elk (wapiti), which can carry CWD. This and the image at the top of the article are adapted from a photo from the Idaho Fish and Game department, under a CC BY 2.0 license.

Prions aren’t viruses

I said before that if a fatal neurological virus were infecting deer across the US, and showed up in cooked infected meat, my default assumption would be “we’re in danger.” But a prion isn’t a virus. Why does that matter?

Let’s look at how they replicate. A virus is a little bit of genetic material in a protein coating. You, a human, are a lot of genetic material in a protein coating. When a virus replicates, it slips into your cells and hijacks your replication machinery to run its genes instead. Instead of all the useful-to-you tasks your genome has planned, the virus’s genome outlines its own replication, assembles a bunch more viruses, and blows up the factory (cell) to turn them loose into the world.

In other words, the virus is using a robust information-handling system that both you and it have in common – the DNA → RNA → protein pipeline often called “the central dogma” of biology. To a first approximation, you can add any genetic information at all into the viral genome, and as long as it doesn’t interfere with the virus’s process, whatever you add will get replicated too.

Prions do not work like this. They don’t tap into the central dogma. What makes them so fundamentally cool is that they replicate without touching the replication machinery that everything else alive uses – their replication is structural, like a snowflake forming. The host provides raw material in the form of PrP, and the prion – once it lands – coaxes that material into the right shape for more to form atop it.

What this means is that you can’t encode arbitrary information into a prion. It’s not as though a prion runs on a separate “protein genome” that we could decipher and then write what we like into. The entire structure of the prion has to work together to replicate itself. If you made a prion with some different fold in it, that fold would have to not just form a stable protein, but pass itself along as well. Prions don’t have a handy DNA replicase enzyme to outsource to – they have to solve the problem of replication themselves, every time.
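Here’s a minimal sketch of that “snowflake” replication in Python – a toy model with made-up numbers, not real biochemistry. The thing to notice is that nothing in the loop copies any information except shape:

```python
import random

# Toy model of template-based prion replication. Nothing here is real
# biochemistry -- the point is that "replication" happens with no genome
# and no copying machinery, only shape conversion on contact.

POOL = 10_000        # normal PrP monomers available in the host (made-up number)
CONVERT_PROB = 0.3   # chance a contact converts a monomer (made-up number)

def simulate(steps: int) -> list[int]:
    normal, prions = POOL, 1   # start with a single misfolded seed
    history = [prions]
    for _ in range(steps):
        new_prions = 0
        for _ in range(prions):   # each prion contacts one monomer per step
            if normal > 0 and random.random() < CONVERT_PROB:
                normal -= 1       # a normal monomer stacks on, gets reshaped...
                new_prions += 1   # ...and is now a prion template itself
        prions += new_prions
        history.append(prions)
    return history

print(simulate(30))   # roughly exponential growth until the PrP pool runs out
```

Growth is exponential while raw material lasts, like a crystal seeding out of solution – and any “mutation” would have to be a change of fold that still pulls off this whole trick.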

Prions can evolve, but they do it less – they have fewer options. They’re more constrained than a virus would be in terms of which changes don’t interrupt the rest of the refolding process and, on top of that, still propagate themselves.

This means that prions are slower to evolve than viruses. …I’m pretty sure, at least. It makes a lot of sense to me. What it definitely means is this:

It’s very hard for prions to cross species barriers

PrP is a very conserved protein across mammals, meaning that all mammals have a version of PrP that’s pretty similar – 90%+ similarity.* But the devil lies in that 10%.

Prions are finely tuned – to convert PrP into a prion, the PrP basically needs to be identical, or at least functionally identical, everywhere the prion works. It not only needs to be susceptible to the prion’s misfolding; it also needs to fold into something that can itself replicate. A few amino acid differences can throw a wrench in the works.

It’s clear that infectious prions can have a hard time crossing species barriers. It depends on the strain. For instance: Mouse prions convert hamster PrP.** Hamster prions don’t convert mouse PrP. Usually a prion strain converts its usual host PrP best, but one cat prion more efficiently converts cow PrP. In a test tube, CWD can convert human or cow PrP a little, but shows slightly more action with sheep PrP (and much more with, of course, deer PrP.)

This sounds terribly arbitrary. But remember, prion behavior comes down to shape. Imagine you’re playing with legos and duplo blocks. You can stack legos on legos and duplos on duplos. You can also put a duplo on top of a lego block. But then you can only add duplo blocks on top of that – you’ve permanently changed what can get added to that stack.

When we look at people – or deer, or sheep, etc. – who are genetically resistant to prions (more on that later), we find that serious resistance can be conferred by a single-nucleotide change in the PrP gene. Tweak one single letter of DNA in the right place, and their PrP just doesn’t bend into the prion shape easily. If the infection takes, it proceeds slowly – slow enough that a person might die of old age before the prion would kill them.

So if a decent number of members of a species can be resistant to prion diseases based on as little as one amino acid, then a new species – one whose PrP might differ by dozens of amino acids – is unlikely to be fertile ground for an old prion.

* (This is kind of weird given that we don’t know what PrP actually does – the name just stands for “prion protein,” because it’s the protein associated with prions, and we don’t know its function. We can genetically alter mice so that they don’t produce PrP at all, and they show slight cognitive issues but are basically fine. Classic evolution. It’s appendices all over again.)
** Sidebar: When we look at studies of this, we see that, like a lot of pathology research, there’s a spectrum of experiments at different points on the axis from “deeply unrealistic” to “a pretty reasonable simulacrum of natural infection”:

1. Shaking up loose prions and PrP in a petri dish and seeing if the PrP converts

2. Intracranial injection with brain matter (i.e. grinding up a diseased brain and injecting some of that nasty juice into the brain of a healthy animal and seeing if it gets sick)

3. Feeding (or some other natural route of exposure) a plausible natural dose of prions to a healthy animal and seeing if that animal gets sick

The experiments mentioned above are based on 1. Only experiments that do 3 actually prove the disease is naturally infectious. For instance, Alzheimer’s disease is “infectious” if you do 2, but since nobody does that, it’s not actually a contagious threat. That said, doing more-abstracted experiments means you can really zoom in on what makes strain specificity tick.

But prions do cross species barriers

Probably the best counterargument to everything above is that another prion disease, BSE, did cross the species barrier. This prion pulled off a balancing act: it successfully infected cows and humans at the same time.

Let’s be clear about one big and interesting thing: BSE is not good at crossing the species barrier. When I say this, I mean two things:

First, people did not get it often. While the big UK outbreak was famously terrifying, only around 200 people ever got sick from mad cow disease. Around 200,000 cows tested positive for it. But most cows weren’t tested. Researchers estimate that 2 million cows total in the UK had BSE, most of which were slaughtered and entered the food chain. These days, Britain has 2 million cows at any given time.

At first glance, and to a first approximation, I think everyone living in the UK for a while between 1985 and 1996 or so (who ate beef sometimes) must have eaten beef from an infected animal. That’s approximately who the recently-overturned blood donation ban in the US affected. I had thought that was sort of an average over who was at risk of exposure – but no, that basically encompassed everyone who was exposed. Exposure rarely leads to infection.

You’re more likely to get struck by lightning than to get BSE even if you have eaten BSE-infected beef.
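Here’s the back-of-envelope arithmetic behind that claim, as a quick sketch. The case and cow counts are from above; the population figure is my own rough assumption:

```python
# Order-of-magnitude only. Case and cow counts are from the text;
# the population figure is a rough assumption on my part.
human_cases   = 200          # people who ever got sick from BSE
infected_cows = 2_000_000    # estimated BSE-infected cows entering the food chain
uk_population = 60_000_000   # very rough UK population at the time (assumption)

# If essentially every British beef-eater was exposed:
print(f"risk per exposed person: ~1 in {uk_population // human_cases:,}")
# -> ~1 in 300,000

# Human cases per infected cow that entered the food chain:
print(f"~1 case per {infected_cows // human_cases:,} infected cows")
# -> ~1 case per 10,000 cows
```

For comparison, the usual lifetime-odds figure for being struck by lightning (which comes up again at the end of this post) is about 1 in 15,000 – more than an order of magnitude more likely.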

Second, in the rare cases where the disease takes, it’s slower. Farm cows live short lives, and cows that died from BSE did so around 4–5 years post-exposure – already old by beef-industry standards. They survived at most weeks or months after symptoms began. Humans infected with BSE, meanwhile, can harbor it for up to decades post-exposure, and live an average of over a year after showing symptoms.

I think both of these are directly attributable to the prion just being less efficient at converting human PrP – versus the PrP of the cows it was adapted to. It doesn’t often catch on in the brain. When it does, it moves extremely slowly.

But it did cross over. And as far as I can tell, there’s no reason CWD can’t do the same. Like viruses, CWD has been observed to evolve as it bounces between hosts with different genotypes. Some variants of CWD seem more capable of converting mouse PrP than the common ones. The good old friend of those who play god, serial passaging, encourages it.

(Note also that all of the above differs from kuru, which did cause a proper epidemic. Kuru spread between humans and was adapted for spreading in humans. When looking at CWD, BSE is the better reference point, because it spread between cows and only incidentally jumped to humans – it was never adapted for human spread.)

How is CWD different from BSE?

BSE appears in very, very low numbers anywhere outside the brains and spines of its victims. CWD is likewise concentrated in the brain, but also appears in the spine and lymphatic tissue, and, to a lesser but still-present degree, everywhere else: muscle, antler velvet, feces, blood, saliva. It’s more systemic than BSE.

Cows are concentrated on farms, and so are some deer, but wild deer carry CWD all hither and yon. As they do, they leave it behind in:

  • Feces – Infected deer shed prions in their feces. An animal that eats an infected deer might also shed prions in its feces.
  • Bodies – Deer aren’t strictly herbivorous if push comes to shove. If a deer dies, another deer might eat the body. One study found that after a population of reindeer started regularly gnawing on each other’s antlers (#JustDeerThings), CWD swept in.
  • Dirt – Prions are resilient and can linger, viable, in soil. Deer eat dirt accidentally while grazing, and on purpose from time to time, and can be infected that way.
  • Grass – Prions in the soil or otherwise deposited onto plant tissue can hang out in living grass for a long time.
  • Ticks – One study found that ticks fed CWD prions don’t degrade the protein. If they’re then eaten by deer (for instance, during grooming), they could spread CWD. This study isn’t perfect evidence; the authors note that they fed the ticks a concentration of prions about 1000x higher than is found in infected deer blood. But if my understanding of statistics and infection dynamics is correct, that suggests that maybe 1 in 1000 ticks feeding on infected deer blood reaches that level of infectivity? Deer have a lot of ticks! Still pretty bad! (There’s a toy version of this arithmetic after the list.)

That’s a lot of widespread potentially-infectious material.
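To make the guess in the tick bullet concrete, here’s a toy restatement under a crude linear-dose assumption. The only number taken from the study is the 1000x concentration gap; everything else is invented for illustration:

```python
# Toy restatement of the tick guess. The only number from the study is the
# 1000x concentration gap; everything else is invented for illustration.
experimental_dose = 1000.0   # relative prion dose the ticks were actually fed
natural_dose      = 1.0      # relative dose in real infected deer blood

# Under a crude linear assumption, a naturally fed tick carries about
# 1/1000 of the load demonstrated to survive tick digestion:
relative_load = natural_dose / experimental_dose   # 0.001

ticks_per_deer = 100   # made-up tick burden, for scale
print(relative_load * ticks_per_deer)
# -> 0.1 "study-level" prion loads per deer's worth of ticks
```

Whether a 1/1000 load is really 1/1000 as infectious is exactly the kind of thing a linear assumption glosses over – but it’s enough to see why “deer have a lot of ticks” keeps this route on the table.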

When CWD is in an area, it can be very common – up to 30% of wild deer, and up to 90% of deer on an infected farm. These deer can carry CWD and have it in their tissues for quite some time asymptomatically – so while it frequently has very visible behavioral and physical symptoms, it also sometimes doesn’t.

In short, there’s a lot of CWD in lots of places through the environment. It’s also spreading very rapidly. If a variant capable of infecting both deer and humans emerged, there would be a lot of chances for possible exposure.

Deer on a New Zealand deer farm. By LBM1948, under a CC BY-SA 4.0 license.

What to do?

As an individual

As with any circumstance at all – COVID or salmonella or just living in a world that is sometimes out to get you – you have to choose what level of risk you’re alright with. When I started writing this piece, I was going to make a suggestion like “definitely avoid eating deer from areas that have CWD, just in case your deer is the one that has a human-transmissible prion disease.” I made a little chart of my sense of the relative risk levels, to help put the risk in scale even though it wasn’t quantified. It went like this:

Imagine a spectrum of risk of getting a prion disease. On one end, which we could call “don’t do this,” is “eating beef from an animal with BSE.” Close to that but slightly less risky is “eating deer from an animal with CWD.” On the other, very safe end is “eating beef from somewhere without known active BSE cases.” This entire model is wrong, though.

But, as usual, quantification turns out to be pretty important. I actually did the numbers about how many people ever got sick from BSE (~200) and how many BSE-infected cows were in the food chain (~2,000,000), which made the actual risk clear. So I guess the more prosaic version looks like this:

Remember that spectrum of risk? Well, all of these risks are infinitesimal. Worry about something else! Eating beef from an animal with BSE is still more dangerous than eating deer from an animal with CWD, which is more dangerous than eating beef from somewhere without known active BSE cases - but all of these are clustered very, very far on the safe side of the graph.

…This is sort of a joke, to be clear. There’s not a health agency anywhere on earth that will advise you to eat meat from cows known to have BSE, and the CDC recommends not eating meat from deer that test positive for CWD (though CWD has never been known to infect a human).

On top of that, the overall threat is still uncertain, because what you’re betting on is “the chance that this animal has an as-yet undetected CWD variant that can infect humans.” There’s inherently no baseline for that!

We don’t know what CWD would act like if it spilled over. It might be more infectious and dangerous than other infectious prion diseases we’ve seen – remember, with humans, the sample size is 2! So if CWD is in your area and it’s not a hardship to avoid eating deer, you might want to steer clear. …But the odds are in your favor.

As a society

There’s not an obvious solution. The epidemic spreading among deer isn’t caused by a political problem; it comes from nature.

The US is doing a lot right: mainly, it is monitoring and tracking the spread of the disease. It’s spreading the word. (If nothing else, you can keep track of this by subscribing to a Google Alert for “chronic wasting disease” – then pretty often you’ll get an email saying things like “CWD found in Florida for the first time” or “CWD found an hour from you for the first time.”) It is encouraging people to submit deer heads for testing, and not to eat meat from deer that test positive. The CDC, APHIS, the Fish & Wildlife Service, and more are all aware of the problem and participating in tracking it.

What more could be done? Well, a lot of the things that would help with a potential spillover of CWD look like actions that can be taken in advance of any threatening novel disease. There is research being done on prions and how they cause disease, on better diagnostics, and on possible therapeutics. All of these are important. Prion disease diagnosis and treatment is inherently difficult and, on top of that, has little overlap with most other kinds of diagnosis or treatment. It’s also such a rare set of diseases that it’s not terribly well studied. (My understanding is that right now there are various kinds of tests for specific prion diseases – which could be adapted for a new prion disease – that are extremely sensitive, although not particularly cheap or widespread.)

I don’t know a lot about the regulatory or surveillance situation vis-à-vis deer farms, or for that matter, much about deer farms at all. I do know that they seem to be associated with outbreaks, and with heavy disease prevalence once there is an outbreak. That’s a smart area to keep an eye on.

If CWD did spill over, what would happen?

First, everyone gets very nervous about eating venison for a while.

It will probably also take time to locate cases and identify the culprit – though given the aforementioned awareness and surveillance of the issue, it ought to take far less time than it took to identify the causative agent of BSE. Officials are already paying attention to deaths that could potentially be CWD-related, like neurodegenerative illnesses that kill young people.

After that, I expect the effects will look a lot like the aftermath of mad cow disease. Mad cow disease was not – and a hypothetical CWD spillover very likely would not be – transmissible between people in the usual ways: coughing, skin contact, fomites, whatever.

It is transmissible via unnatural routes, which is to say, blood transfusions. You might remember how people who’d spent over six months in Britain couldn’t donate blood in the US until 2022 – a direct response to the BSE outbreak. Yes, the disease was extremely rare, but unless you can quickly and cheaply test incoming blood donations, an infected donor could give blood to multiple people. Suppose some of them donate blood down the line: you’d have a chain of infection, and a disease with a potentially decades-long incubation period. And remember, the disease is incurable and fatal. So, basically, the blood donation system (and probably other organ donation) becomes very problematic.
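To see why blood banks sweat a disease this rare, here’s a toy branching-process sketch. Every parameter is invented for illustration – the real system screens donors precisely so that none of this happens:

```python
# Toy branching process for transfusion spread. All parameters are invented
# for illustration; real blood systems screen donors to keep R near zero.
recipients_per_donation = 3     # people receiving components of one donation (assumption)
p_transmit              = 0.9   # transmission chance per infected unit (assumption)
p_later_donates         = 0.3   # chance an infected recipient donates during a
                                # decades-long symptomless incubation (assumption)

# Average number of new infected donors produced by one infected donor:
R = recipients_per_donation * p_transmit * p_later_donates
print(f"R = {R:.2f}")

# For R < 1 each chain eventually dies out, but its expected total size
# (counting the first donor) is 1 / (1 - R):
if R < 1:
    print(f"expected total chain size: {1 / (1 - R):.1f}")
```

With those made-up numbers, each undetected infected donor seeds a chain of roughly five infections, spread invisibly over decades – which is why the response was a blunt, years-long deferral rather than case-by-case screening.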

That said, I don’t think it would break down completely. In the BSE case, lots of people in the UK eat beef from time to time – probably most people. But with a deerborne disease, I would guess that a lot of the US population could confidently declare that they haven’t eaten deer within the past, say, year or so (prior to a detected outbreak.) So I think there’d be panic and perhaps strain on the system but not necessarily a complete breakdown. Again, all of this is predicated on a new prion disease working like known human prion diseases.

Genetic resistance

One final fun fact: people with a certain genotype in the PrP gene – specifically, PRNP codon 129 M/V or V/V – are incredibly resistant to known infectious prion diseases. If they do get infected, they survive for much longer.

It’s not clear that this would hold true for a hypothetical CWD crossover to humans. But it is true for both kuru and BSE, and it’s partly (although not totally) protective against sporadic Creutzfeldt-Jakob disease.

If you’ve used a service like 23andMe, maybe check out your data and see if you’re resistant to infectious prion diseases. Here’s what you’re looking for:

129 M/V or V/V (amino acids), i.e. A/G or G/G (nucleotides) – at SNP rs1799990

If you instead have M/M (amino acids) or A/A (nucleotides) at that site, you’re… not quite SOL, just at a higher but still very low overall risk.
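If you’d rather check programmatically, here’s a minimal sketch that assumes the standard 23andMe raw-text export (tab-separated rsid, chromosome, position, genotype, with “#” comment lines). The filename is made up, and the amino-acid readings just follow the mapping above:

```python
# Minimal sketch: look up rs1799990 in a 23andMe-style raw export
# (tab-separated: rsid, chromosome, position, genotype; '#' lines are comments).
# Interpretation follows the mapping above: A = methionine (M), G = valine (V).

VERDICT = {
    "AG": "M/V - resistant to known infectious prion diseases",
    "GG": "V/V - resistant to known infectious prion diseases",
    "AA": "M/M - higher (but still very low) overall risk",
}

def check_prnp_129(path: str) -> str:
    with open(path) as f:
        for line in f:
            if not line.strip() or line.startswith("#"):
                continue
            rsid, _chrom, _pos, genotype = line.rstrip("\n").split("\t")
            if rsid == "rs1799990":
                key = "".join(sorted(genotype))
                return VERDICT.get(key, f"unexpected genotype: {genotype}")
    return "rs1799990 not in this export"

print(check_prnp_129("my_23andme_export.txt"))   # hypothetical filename
```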


Final thoughts

  • I think exercises like “if XYZ disease emerges, what will the ramifications and response be” are valuable. They lead to questions like “what problems will seem obvious in retrospect” and “how can we build systems now that will improve outcomes of disasters.” This is an interesting case study and I might revisit it later.

  • Has anyone reading this ever been struck by lightning? That’s the go-to comparison for things being rare. But 1 in 15,000 isn’t, like, unthinkably rare. I’m just curious.

  • No, seriously, what’s the deal with deer farms? I never think about deer farms much. When I think of venison, I imagine someone wearing camo and carrying a rifle out into a national forest or a buddy’s backyard or something. How many deer are harvested from hunting vs. farms? What about in the US vs. worldwide? Does anyone know? Tell me in the comments.

This essay was crossposted to LessWrong. Also linked at the EA Forums.

If you want to encourage my work, check out my Patreon. Today’s my birthday! I sure would appreciate your support.

Also, this eukaryote is job-hunting. If you have or know of a full-time position for a researcher, analyst, and communicator with a Master’s in Biodefense, let me know:

Eukaryote Writes Blog (at) gmail (dot) com

In the meantime, perhaps you have other desires. You’d like a one-off research project, or there’s a burning question you’d love a well-cited answer to. Maybe you want someone to fact-check or punch up your work. Either way, you’d like to buy a few hours of my time. Well, I have hours, and the getting is good. Hit me up! Let’s chat. 🐟