Category Archives: history

Did photos of the 1917 Miracle of the Sun at Fatima prove the sun was at an impossible place in the sky?

Man, I don’t think so.

Background

The Miracle of the Sun at Fatima was a 1917 event predicted by several children who said they were visited by the Virgin Mary. Thousands of people showed up in the field in Fatima at the predicted date, and witnessed odd celestial phenomena. It was validated by the Catholic Church as a true miracle.

Accounts are not unanimous, but witnesses generally reported the sun as moving around, changing colors, and “spinning” in the sky.

This has a couple interesting features: It was documented at the time and many thousands of people were there, and later gave their eyewitness accounts – and it was photographed by journalist Judah Bento Ruah for the Portuguese newspaper O Século (“The Century”).

The sky does not actually show up in the photos, which is expected – photographing the sun was especially hard in 1917, and Ruah didn’t show up expecting the sun in particular to be doing something weird. (Nor were the pilgrims – they were expecting a miracle, but didn’t know what it would be.) Ruah’s photos, first published soon after the event, sure do show a lot of people gathered in a field, staring rapt at the sky.

Dr. Phillipe Dalleur analyzed these, and wrote a paper (“Fatima Pictures and Testimonials: in-depth Analysis”, Scientia et Fides, September 2021) arguing that these photos indicate that the light source in the photo corresponding to the observed sun is at about 29°, not at the expected solar angle of about 40° – so it’s concrete evidence of a true miracle, right?

More recently, Ethan Muse at Motiva Credibilitatis uses this point to argue the same. Muse and Dalleur point to other aspects of the event too – e.g. even if the apparent odd behavior of the sun was due to really weird meteorological conditions, how were the children apparently able to predict that? What were the people seeing? What were the children reporting?

See e.g. Evan Harkness-Murphy at The Magpie for possible explanations for some of these other attributes that do not require supernatural explanations.

But as a woman with a fondness for A) very concrete claims of unexpected phenomena and B) OSINT, I’m only going to be looking into the angle-of-the-sun thing.

Because I do agree that if these photos showed that the sun was at an odd angle, that would be evidence for a miracle. The photos were first published days after the event, so while there were ways to doctor photos at the time, they would have had to do it quickly. The photos were taken by a Jewish journalist who did not have a clear motive to fake evidence of a Christian miracle (attributed to the Virgin Mary). Lots of people corroborated the date and approximate time of the event, and the sun is, of course, one of the most reliable physical phenomena there is. If that can change, maybe it has to be God doing it.

But I read Dalleur (2021) and I’m really not sold that the photos indicate anything celestially weird going on.

Dalleur’s data

Most of Dalleur’s argument comes from one photo, listed on the Shrine of Fatima website and in his paper as D115. It’s this one – we’ll be referring back to it a lot.

A black-and-white photo showing many people congregated on a rocky hillside, in suits and shawls and dresses, mostly staring up at the sky.

I’m going to ignore the part in Dalleur’s paper about the dry parts on the clothing. I think it’s explainable by conventional means. People don’t necessarily stay at exactly the same angle for long – they shift around, and might have been covering and uncovering themselves at idiosyncratic angles to dry off after being rained on. I don’t think this tells skeptics much about the light source.

Dalleur has clear photos; we don’t

A lot of Dalleur’s argument has to do with shadows in the photographs. The shadows are really faint, which is what we’d expect – everyone agrees that it had recently stopped raining, so the sky is cloudy and the lighting is diffuse.

The publicly available versions of these photos are low-resolution and do not really show shadows. Dalleur obtained high-quality scans of these photographs from the originals at the Shrine of Fatima. These versions show the detailed shadows Dalleur is using for his analysis, and as far as I can tell, aside from cropped excerpts included in Dalleur’s paper, these higher-resolution versions are not available online.

Compare what sure looks like the shadow of this cane in Dalleur’s version, to the enlarged version of the photo I was able to download from the Shrine of Fatima’s website. You can’t really tell there’s a shadow in the lower-res public version.

Left: Excerpt from Dalleur (2021). Right: the same area in the publicly available version of the photo.
The shadow in question. Like, it’s faint either way, but it looks like a shadow on the left and I’d be hard-pressed to tell that it was anything on the right.

More shadows would be really useful for analysis. (I sent Phillipe Dalleur an email asking for the full versions but haven’t heard back yet.) It would be much easier to evaluate his claims if these were available.

On using the sun in OSINT

I went into this hoping that one of the photos had a specific shadow. Nick Waters at Bellingcat outlines how to use shadows for chronolocation – that is, identifying when a photo was taken.

In short, if you have a photo with an object lit by the sun and casting a shadow –

  • Where the casting object is roughly vertical
  • Where the shadow is cast on level ground

And…

  • You know where the photo was taken
  • You know what date the photo was taken
  • You know vaguely what direction the shadow is pointing

…Then you can tell what time the photo was taken.

If you have a different subset of this information – say, you know the time but not the location – you can work the other way and tell where the photo was taken, or at least narrow it down to certain parts of the world.

In this case, we know where, when, AND what time the photo was taken – at least roughly. Our question is whether all of this information lines up the way we expect. If the shadow looks different, then the light source must be different, like, say, if the sun or a sun-like object is moving miraculously around the sky.
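
Just to make the stakes concrete, here’s a toy back-of-the-envelope of my own (not from Waters or Dalleur) showing how different the shadow of a one-meter vertical stick on level ground would look under the expected sun versus under the claimed Lsa:

```python
import math

# Shadow of a 1 m vertical stick on level ground:
# shadow_length = height / tan(solar_altitude)
for alt_deg in (40, 29):  # expected sun vs. Dalleur's estimated Lsa
    shadow = 1.0 / math.tan(math.radians(alt_deg))
    print(f"altitude {alt_deg}°: shadow is about {shadow:.2f} m long")
```

That comes out to roughly 1.2 m versus 1.8 m – a difference you could eyeball, which is exactly why one clean shadow on level ground would settle this.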

But alas, there is no shadow in these photos that meets these criteria. This isn’t surprising. Everyone agrees that it was raining shortly beforehand, so the sky was still cloudy and the lighting was diffuse. (Also, again, the publicly available photos are so low-quality you can hardly make out any specific shadows at all.)

All of this is to say that Dalleur has to use much more complicated methods to estimate the sun’s angle, and I see why. There aren’t good shadows for the simpler method described by Waters.

I just don’t think the methods he used instead are good enough to draw the conclusions he draws.

Are there really two light sources?

First of all, Dalleur’s argument includes an assumption that there are two light sources casting shadows – the light source corresponding to the thing the witnesses identified as the moving sun (“Light source a” AKA Lsa), plus a diffuse, higher-up light source, probably sunlight bouncing off and/or filtering through a cloud (Lsb).

Dalleur’s proposed setup.

A quick note – I grew up in Seattle so I’ve experienced plenty of overcast and weirdly-sunny-sort-of-overcast days. I don’t remember seeing a double shadow under these conditions. I can’t rule it out, and of course Dalleur isn’t claiming for certain that Lsa is the sun or works just like the sun, but I wouldn’t go in expecting a double shadow in these conditions.

In any case, Dalleur only points out one example of a double shadow, cast by a rock on rocky ground. (Left, below.) While the excerpt does look like a double shadow, it could also be the shadow of the rock next to it, or the ground color, or water from the rain that hasn’t dried yet.

There’s also an example of the two light sources cast on a curved umbrella handle (right, below), but the “separation” between the two light sources could be a darker part on the wood of the handle.

Excerpt from Dalleur's paper, captioned: Figure 6. D115 details. Left: double shadow, a (from LSa) and b (from LSb) of a flat stone on a sloped surface, (☼: shadowless area; arrows: direction of LSa and LSb rays). The deep black of a+b bares a low ambient light. Right: LSa and LSb specular reflections on umbrella’s handle. Note also the lapels shadow on the upright jacket front, and the reflections on the cornea of the eyes.

Like, it’s just the rock. I guess all I’m saying is that if you assumed there was only one shadow cast, and averaged Lsa and Lsb into a single source, that would look more like one light source higher in the sky (more like the expected location of the sun).

Doodle of shadows cast from a rock. Two light sources (one high up, one at a sharp angle) cast two shadows, but it looks kind of like the shadow cast from one light source that's just between the two, altitude-wise.
Like, these would look pretty similar, especially if the shadows weren’t clear and all you had to go from was one old photo.

I’m unclear to what degree these are explicitly built into the analysis that gets us to the final 29°-Lsa-angle number. Dalleur looks at 4 shadows and as far as I can tell, only one of them (this rock) features a double shadow.

In any case, Dalleur spends time on the “two light sources casting shadows” thing and I’m not sold based on the example given.

I’m pretty sure there are too many assumptions to make the math reliable

First of all, here’s an (angle-accurate) sketch of the situation in question. According to Dalleur and to eyewitness testimony collected by John de Marchi in “The True Story of the Miracle at Fatima”, the miracle takes place for a few minutes somewhere between noon and 1:30 in solar time, Fatima, Portugal, October 13, 1917 – so according to SunCalc.org, the sun should be at a solar altitude of 39°-42°. Dalleur agrees. He suggests that the apparent “sun” (his Lsa) angle in the photo he analyzes is 25°-32°.

Diagram of the expected solar altitude (39-42 degrees in the sky) vs Dalleur's estimated Lsa altitude (25-32 degrees in the sky). They're definitely different but they're not THAT far apart.
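
If you want to sanity-check that 39°-42° figure without just trusting SunCalc, the standard spherical-astronomy formula gets you the same ballpark. (The latitude and solar declination below are my own rough approximations, not numbers from Dalleur’s paper.)

```python
import math

def solar_altitude(lat_deg, dec_deg, hour_angle_deg):
    # sin(altitude) = sin(lat)·sin(dec) + cos(lat)·cos(dec)·cos(hour angle)
    lat, dec, h = map(math.radians, (lat_deg, dec_deg, hour_angle_deg))
    return math.degrees(math.asin(
        math.sin(lat) * math.sin(dec)
        + math.cos(lat) * math.cos(dec) * math.cos(h)))

LAT_FATIMA = 39.6   # degrees north, approximate
DEC_OCT_13 = -7.8   # approximate solar declination in mid-October

for solar_time in (12.0, 12.5, 13.0, 13.5):
    hour_angle = 15 * (solar_time - 12)  # the sun moves 15° per hour
    alt = solar_altitude(LAT_FATIMA, DEC_OCT_13, hour_angle)
    print(f"{solar_time:4.1f} h solar time: altitude ≈ {alt:.0f}°")
```

That prints altitudes running from about 43° at solar noon down to about 38° at 1:30 – the same ballpark as the SunCalc numbers above.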

To be clear, if this were true, astronomically, this would be bananas – the sun is so reliable that if we had a good piece of evidence that it had been anything else (say, there were a sundial on level ground in the photo that clearly had an incongruent shadow) we should be very surprised indeed. But I show you this drawing so you get a sense that we’re not dealing with, like, a hugely different angle, especially as estimation error comes into play. Which it will.

As far as I can tell, the main argument about that 29°-angle comes from photo D115 (the same photo we’ve been looking at so far) and some math about photo optics and what “pitch” and “roll” angles the photo was taken at.

I can’t pretend I totally understand or can replicate the math used. But I can tell there’s a lot of estimation. (See around Figure 10 and 11 in the paper.)

Figure from the paper showing various angles used by Dalleur to model the photo environment and light source.
Figure 11 from Dalleur (2021). Featuring: More optics than I know what to do with. Sorry.

While not every estimate used is spelled out, it looks like Dalleur estimated:

  • Where the obscured horizon is
  • The height the camera is at
  • How far the camera seems to be from the reference points
  • The angle of a hanging thread on some of the clothing in the photo (we’re not sure exactly what), even though he acknowledges that another hanging thread on a different person hangs at a different angle (because of wind)
  • The camera being correctly leveled every time, even though we’re told Ruah was moving the camera and taking pictures at an incredibly fast (for 1917) rate of one per minute
  • The slope of the ground
  • The angles of the 4 shadows indicating the direction of Lsa (more on this soon)
  • The focal distance of the camera, which was set manually by the photographer and which is estimated partly by using a reference for what good photography practices at the time were.

Even if the math comes up as putting the sun at a ~29° altitude, I’d be shocked if there’s not enough error in there to account for a possible 8° difference in actual solar/Lsa angle.
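
To gesture at why, here’s a deliberately toy Monte Carlo – not Dalleur’s actual geometry, just an illustration of how a few stacked estimates widen the error bars. Suppose the inferred Lsa altitude were a nominal 29° reading plus a handful of independent estimation errors (horizon, camera pitch, ground slope, shadow-angle reading), each assumed good to about ±3°:

```python
import random

random.seed(0)

N = 100_000
results = sorted(
    29.0 + sum(random.gauss(0, 3) for _ in range(4))  # four ±3° error terms
    for _ in range(N)
)

low, high = results[int(0.025 * N)], results[int(0.975 * N)]
frac_reaching_39 = sum(r >= 39 for r in results) / N
print(f"95% of toy runs land between {low:.1f}° and {high:.1f}°")
print(f"{frac_reaching_39:.1%} of runs reach the expected 39°+ range")
```

In this toy setup the 95% band runs from roughly 17° to 41° – wide enough to brush up against the expected solar altitude. The real error structure is surely different, but that’s the shape of the problem.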

Also, those shadow angles are sus

As best I understand it, the shadow angles are the really important part of this calculation. The whole thing hinges on those angles being unexpected. But we’re looking at a total of 4 shadows, none of which are cast on level surfaces, and some of which I don’t even think are cleanly cast shadows indicating the direction of Lsa. For 3 of the 4, Dalleur does not indicate what he thinks these angles are.

Figure 10 from Dalleur (2021.) The photo from above, highlighting 4 shadows whose angles Dalleur used to estimate the angle of Lsa.

One of these is the double-shadowed rock from before.

Another (bottom right of the above image) is, apparently, another rock. I don’t even know what precisely I’m looking at here. But it doesn’t look like it’s cast on a level surface. That angle is going to drastically affect what angle the shadow seems to be at.

Another is the hand of the boy on the left side of the photo, and again, I’m not sure what I’m supposed to be gleaning here or what Dalleur thought that angle was. Is it the shadow of the pocket on the thumb? Thumbs are curved! His hand might be angled!

Another is on this hat.

Another figure from Dalleur 2021, captioned: Figure 9. Perspective direction of shadows on D115 (denoise by Topaz®; contrast by Blender®). The projected direction of LSa rays varies from 32.2° to about 37.40° (symmetrical reflections on a perpendicular knob). 1: tangent shadow of the crown on the brim of the hat. 2: dark grazing shadow on a back white object, of the brim in contact with the stick. 3: thin brim’s shadow on the buttock, almost in contact with it. 4: shadow on crown’s crease.

Look at that line 4 on the top of the hat.

The same image from the figure above but without overlays, indicating that the shadow is indeed kind of vague. That's a hat, alright.

To me that could easily be the fold on the “bowl” of the hat, and not something that would tell us much about the specific angle of the light source. And either way, it’s cast on the “bowl” of the hat, which we wouldn’t assume to be flat or level.


I think someone with more patience for trig than me, or better yet, experience with Blender or another 3d modelling software, could take a crack at this more definitively – like, setting up this scenario with some models and playing with the angles and light sources (Blender can definitely simulate one diffuse light source vs. two light sources at different angles, etc) and seeing what looks most like the shadows in the photos. I expect the shadows in the photograph will be totally in line with natural phenomena and the natural position of the sun.
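
If anyone wants to take that on, here’s a rough sketch of what the setup might look like in Blender’s Python console. The geometry, light strengths, and the second light’s altitude here are placeholders I made up, not anything from Dalleur’s reconstruction:

```python
import bpy
import math

# Start from an empty scene.
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()

# A ground plane and a 1 m vertical "cane" to cast shadows.
bpy.ops.mesh.primitive_plane_add(size=10)
bpy.ops.mesh.primitive_cylinder_add(radius=0.02, depth=1.0, location=(0, 0, 0.5))

def add_sun(name, altitude_deg, azimuth_deg, strength):
    """Add a sun lamp whose rays arrive from the given altitude/azimuth."""
    bpy.ops.object.light_add(type='SUN')
    sun = bpy.context.object
    sun.name = name
    sun.data.energy = strength
    sun.data.angle = math.radians(5)  # a wider solar disc softens the shadows
    # A sun lamp shines along its local -Z axis: tilt it (90° - altitude)
    # from vertical, then spin it to the chosen azimuth.
    sun.rotation_euler = (math.radians(90 - altitude_deg), 0.0,
                          math.radians(azimuth_deg))

add_sun("LSa", 29, 180, 3.0)  # the low source Dalleur proposes
add_sun("LSb", 60, 180, 1.0)  # a higher, weaker, diffuse-ish source (placeholder angle)

# A camera pointed roughly back at the scene, then render to a file.
bpy.ops.object.camera_add(location=(4, -4, 2),
                          rotation=(math.radians(75), 0, math.radians(45)))
bpy.context.scene.camera = bpy.context.object
bpy.context.scene.render.filepath = "//two_source_shadows.png"
bpy.ops.render.render(write_still=True)
```

From there you could swap the two lamps for a single sun at 40°, re-render, and compare the shadow patterns side by side.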


Thanks for reading – this is a post a friend encouraged me to hammer out. I’ll link his related analysis when it’s up. I’ll be back to biosecurity soon, I promise. I have a freaky virus to tell you about and everything.

This post is mirrored to Eukaryote Writes Blog and Substack.

Support Eukaryote Writes Blog on Patreon.

Book review: Air-borne by Carl Zimmer

Remember early 2020 and reading news articles and respected sources (the WHO, the CDC, the US surgeon general…) confidently asserting that covid wasn’t airborne and that wearing masks wouldn’t stop you from catching it?

Man, it’s embarrassing to be part of a field of study (biosecurity, in this case) that had such a public moment of unambiguously whiffing it.

a framed relic - an internet archive screenshot of a World Health Organization graphic saying, among other things, "Masks are effective only when used in combination with frequent hand-cleaning" - and a tweet from the US Surgeon General saying "Seriously people- STOP BUYING MASKS! They are NOT effective in preventing general public from catching #Coronavirus." This framed relic is captioned "Whoops" - early 2020.

I mean, like, on behalf of the field. I’m not actually personally representative of all of biosecurity.

I did finally grudgingly reread my own contribution to the discourse, my March 2020 “hey guys, take Covid seriously” post, because I vaguely remembered that I’d tried to equivocate around face masks and that was really embarrassing – why the hell would masks not help? But upon rereading, mostly I had written about masks being good.

The worst thing I wrote was that I was “confused” about the reported takes on masking – yeah, who wasn’t! People were saying some confusing things about masking.

I mean, to be clear, a lot of what went wrong during covid wasn’t immediately because biosecurity people were wrong: biosecurity experts had been advocating for years for a lot of things that would have helped the covid response (recognition that bad diseases were coming, need for faster approval tracks for pandemic-response countermeasures, need for more surveillance…) And within a couple months, the WHO and the Surgeon General and every other legitimate organization was like “oh wait we were wrong, masks are actually awesome,” which is great.

Also, a lot went right – a social distancing campaign, developing and mass-distributing a vaccine faster than any previous vaccine in history – but we really, truly dropped the ball on realizing that COVID was airborne.

In his new book Air-borne: The hidden history of the air we breathe, science journalist Carl Zimmer does not beat around this point. He discusses the failure of the scientific community and how we got there in careful, heartbreaking detail. There’s also a lot I didn’t know about the history of this idea, of diseases transmitting over long distances via the air, and I will share some of it with you now.


Throughout human history, there has been, of course, a great deal of confusion and debate about where infectious diseases came from and how they were spread, both before and to some extent after Louis Pasteur and Robert Koch et al illuminated the nature of germ theory. Germ theory and miasma theory were both beloved titans. Even after Pasteur and Koch had published experiments, the old order, as you may imagine, did not go quietly; there were in fact a series of public debates and challenges with prizes and winners that pitted e.g. Pasteur against old standouts of miasma theory.

One of the reasons that airborne transmission faced the pushback it did is that it was seen as a waffley compromise – a partial return to miasma theory. What, like both a germ and the air could work together to transmit a disease? Yeah, sure.

Airborne transmission was studied extensively in the 1950s. It eventually became common knowledge that tuberculosis was airborne. That other diseases, like colds and flu and measles, could be airborne, was the subject of intense research by William and Mildred Wells, whose vast body of work included not only proving airborne transmission but experimenting with germ-killing UV lights in schools and hospitals — and who remain virtually unknown to this day.

Let us acknowledge a distinction often made between droplet-borne diseases, where heavy wet particles might fly from a sneeze or cough for some six feet or so, and airborne diseases, which might travel across a room, across a building, wafting about in the air for hours, et cetera. This distinction is regularly stressed in the medical field although it seems to be an artificial dichotomy – spewed particles seem to be on a spectrum of size and the smaller ones fly farther, eventually becoming so small they’re much more susceptible to vagaries in air currents than to gravity’s downward pull. Droplet-borne diseases have been accepted for a long time, but airborne diseases were thought by the modern medical establishment to be very rare.

(I forget if Zimmer makes this point, but it’s also easy to imagine how it’d be easier for researchers to notice shorter-distance droplet-borne transmission – the odds a person comes down with a disease relates directly to how many disease particles they’re exposed to, and if you’re standing two feet away from a coughing person, you’ll be exposed to more of the droplets from that blast than if you’re ten feet away. Does that make sense? Here’s a diagram.)

A drawing of two stick figures standing in a cone of purple mist being fired from a spray can. The figure further out in the spray looks at their arm and says "Hmm, I'm slightly more purple than I'd prefer to be". The figure closer to the nozzle, being hit more intensely by the direct blast of purple, screams "AAAAAAAAAAAAAAAA".
Aerosols disperse from their source over distances.

(But that doesn’t mean that ten-foot transmission will never happen. Just that it’s less likely.)

Why didn’t the Wells’ work catch on? Well, it was controversial (see the ‘return to miasma’ point above), and also, they were just unpleasant and difficult to work with. They were offputting and argumentative. Also Mildred Wells was clearly the research powerhouse and people didn’t want to hire just her, for some reason.* Their colleagues largely didn’t want to hire and fund them or to publish their work. We have a cultural concept of lone genius researchers, but these are, in terms of their impact, often fictional – science is a broadly collaborative affair.

The contrast in e.g. Koch and Pasteur’s status vs. William and Mildred Wells made me think about the nature of scientific fame. I wonder if most generally-famous scientists were famous in their lifetimes too. Koch and Pasteur were. Maybe most famous scientists are also famous because they’re also good science communicators. I’m sure that also interplays with getting your ideas out into the world – if you can write a great journal article that sounds like what you did is a big deal, more people will read it and treat it like a big deal.

The Wells were not a big deal, not in their day nor after. Their work, studying disease and droplet transmission and the possibility of UV lamps for reducing disease transmission (including putting lamps up in hospitals and schools), struggled to find publication and has only recently been unearthed as a matter of serious study.

Far UV lamps are the hot new thing in pandemic and disease response these days. Everyone is talking about them.

There are variations and nuance, but the usual idea works like this: you put lamps that emit germ-killing UVC light up in indoor spaces where people spend a lot of time. UVC light can cause skin cancer (albeit less than its lower-energy cousin, UVB). But you can just put the lamps in ventilation systems or aimed up at the ceilings, where they don’t point at people or skin but instead kill microbes in the air that wafts by them. Combined with ventilation, you can sterilize a lot of air this way.

William and Mildred Wells found results somewhere in between “positive” and “equivocal” – the effect being stronger when people spent more of their day under the lamps, e.g., pretty good in hospital wards and weaker in schools.

They’re not too expensive and could be pretty helpful, especially if they became standard in places where people spend a lot of time – and especially in hospitals. Interest in this is increasing but there’s not much in the way of requirements or incentives for any such thing yet.

*Sexism. Obviously the reason is sexism.


The other heroes of the book are the Skagit Valley Chorale. In March 2020, covid spread through a single Skagit Valley Chorale rehearsal, causing multiple fatal cases. Afterwards, the survivors worked with researchers, who figured out where everyone was standing and where points of contact were, did interviews and mapping, and determined that there had been no coughing or sneezing – that the disease had in fact been flung at great distances just by singing – that it was really airborne. (There were other studies in other places indicating the same thing.) But this specific piece of contact tracing was instrumental and influential, and cooperation between academic researchers and these grieving choir members formed an early, distinct piece of evidence that covid was indeed airborne.

I think being part of research like this – an experimental group, opting into a study – is noble. It’s selfless, and what a heroic and beautiful thing to do with your grief and your suffering, to say: “Learn everything you can from this. Let what happened here be a piece in the answer to it not happening again.”

(Yeah, I got dysentery for research, but listen, nobody in the Skagit Valley Chorale got $4000 for their contributions. They just did it for love. That’s noble.)


There was also a cool thread of the story that involved microbiologists like Fred Meier and their interactions with the early age of aviation – working with Lindbergh and Earhart and balloons and the earliest days of commercial aviation to strap instruments to their craft and try to capture microbes whizzing by.

And they found them – bacteria, pollen, spores, diseases, algae, visitors and travellers and tiny creatures that may have lived up there all their lives. Another vast arm of the invisible world of microbes.


I’ve been interested in the mechanics of disease transmission for almost as long as I’ve been interested in disease. In my freshman year of college I tried an ambitious if bungled study on cold and flu transmission in campus dorms. (That could have been really cool if I’d known more about epidemiological methods or at least been more creative about interpreting the data, I think. Institutions are famously one of the easier places to study infectious diseases. Alas.) Years later I tried estimating cold and flu transmission in more of an EA QALY/quantifying-lost-work-days sense and really slammed into the paucity of transmission studies. And then covid came, and covid is covid – we probably got the best data anyone has ever gotten on transmission of an airborne/dropletborne disease.

More recently, I’ve been doing some interesting research into rates and odds of STD transmission, and there’s a lot more there: there’s a lot of interest and money in STD prevention, and moreover, stigmatized as they are, it’s comparatively easy to determine when certain diseases were caught. They transmit during specific memorable occasions, let’s put it like that.

For common air- or droplet-borne diseases? Actual data is thin on the ground.


I think this is one of the hard things about science, and about reasoning in and out of invisible, abstract worlds – math, statistics, physics at the level of atoms, biology at the level of cells, ecology at the level of populations, et cetera. You know some things about the world without science, like, you don’t need to read a peer-reviewed paper to know that you don’t want to touch puke, and you don’t need to consult with experts in order to cook pasta. The state of ambient knowledge around you takes care of such things.

And then there’s science, and science can tell you a lot of things: like, a virus is made of tiny tiny bricks made of mucus, and your body contains different tiny virus detectors (also themselves made of mucus), and we can find out exactly which mucus-bricks of the virus trigger the mucus-detectors in your body, and then we can like play legos with those bricks and take them off and attach them to other stuff. We know about dinosaurs and planets orbiting other stars.

And science obviously knows and tells us some useful stuff that interacts with our tangible everyday world of things: like, you can graft a pear tree onto a quince tree because they’re related. A barometer lets you predict when it’s going to rain. You can’t let raw meat sit around at room temperature or you might get a disease that makes you very sick. Antibiotics cure infections and radios, like, work.

And then there’s some stuff that’s so clearly at this intersection that you might assume it’s in this domain of science. Like, we know how extremely common diseases transmit, right? Right?

It used to blow my mind that we know enough about blood types to do blood transfusions and yet can’t predict the weather accurately. Now it makes visceral sense to me, because human blood mostly falls into four types relevant to transfusions, and there are about ten million factors that influence the weather. (Including bacteria.)

Disease transmission is a little bit like predicting the weather, because human bodies and environments are huge complicated machines, but also not as complicated, because the answer is knowable – like, you could do tests with a bunch of human subjects and come up with some reasonable odds. We just… haven’t.


Actually, let’s unpack this slightly, because I think it’s easy to assume that airborne (or dropletborne) disease transmission would be dirt cheap and very easy to study experimentally.

To study disease transmission experimentally, you need to consider three things (beyond just finding people willing to get sick):

First, a source of infection. If you’re trying to study a natural route of infection like someone coughing near you, you can’t just stick people with a needle that has the disease – you need a sick person to be coughing. For multiple reasons, studies rarely infect a person on purpose with a disease, let alone two separate groups of people (the infection sources and the people becoming infected) – you might need to find a volunteer naturally sick with the disease to be Patient Zero.

Second, exposure. People are exposed to all sorts of air all the time. If you go about your everyday life and catch a cold, it’s really hard to know where you got the cold from. You might have a good guess, like if your partner has a cold you can make a solid statistical argument about where you were exposed to the most cold germs – or you might have a suspicion, like someone behind you on the bus coughing – but mostly, you don’t know. A person in a city might be exposed to the germs of hundreds on a daily basis. In a laboratory, you can control for this by keeping people isolated in rooms with individually-filtered air supplies and limited contact with other people.

Third, when a person is exposed to an infectious disease, it takes time to learn if they caught it or not. The organism might get fought off quickly by the body’s defenses. Or the organism might find a safe patch of tissue to nestle in and grow and replicate – the incubation period of the infection. It’ll take time before they show symptoms. Using techniques like detecting the pathogen itself, or detecting an immune response to the pathogen, might shave off time, but not a lot – you still have to wait for the pathogen to build up to a detectable level or for the immune response to kick in. Depending on the disease, they also may have caught a silent asymptomatic infection, which researchers only stand a chance of noticing if they’re testing for the presence of the pathogen (which, depending on the pathogen and the tests available for it, might entail an oral or nasal swab, a blood test, a feces test…)

So combine these things – you want to test a simple question, like “if Person A who is sick with Disease X coughs ten feet away from Person B, how likely is Person B to get sick?” The absolute best way to get clean and ethically pure data on this is to find a consenting Person A who is sick with Disease X, find a consenting Person B (ideally one you are certain is not already sick, perhaps by keeping them in an isolated room with filtered air beforehand for the length of the incubation period), have Person A stand ten feet away and cough, and then sweep Person B into an isolated room with filtered air for the entire plausible incubation period, and then see if they get sick, and then have this sick person cared for until they are no longer infectious.

And then repeat that with as many Persons B as it takes to get good data – and it might be that only, like, 1% of Persons B get sick from a single sick person coughing 10 feet away from them. So then you need, I don’t know, 1000 Persons B at least to get any decent data.
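
For a rough sense of scale – this is just the normal approximation to a binomial, with a made-up 1% attack rate, not data from any actual study:

```python
import math

p = 0.01  # assumed true attack rate for one exposure event
for n in (100, 1000, 10_000):
    se = math.sqrt(p * (1 - p) / n)  # standard error of the estimated rate
    print(f"n = {n:>6}: 95% CI is roughly {p:.1%} ± {1.96 * se:.2%}")
```

With 100 volunteers you can’t even distinguish a 1% attack rate from zero; with 1,000 you’re still only pinning it down to within about half a percentage point either way.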

It’s not impossible. It’s completely doable. I merely lay this out so that you can see that producing these kinds of basic numbers about disease transmission would instantly entail a lot more expense and human volunteers than you might think.

A friend of mine did human challenge trials studying flu transmission, and they did it similarly to this – removing the initial waiting period (which is fair, most people are not incubating the flu at any given moment) and with more intense exposure events, with multiple Persons B in a room actively chatting and passing objects around with a single Person A for an hour, and then sending Persons B to a series of hotel rooms for a few days to see if anyone got sick.

(What about going a step further: just having Person A and Persons B in a room, Person A coughs, and then send Persons B home and call them a few days later to ask about symptoms? You could compare this to a baseline of Persons C who were not in a room with a Person A coughing (“C” for “control”). Well, I think this would get you valid and usable numbers, but exposing people to infectious diseases that could then be freely passed on to nonconsenting strangers is considered a “bioethics no-no” – and so researchers have, to my knowledge, mostly not tried this.)

(Maybe someone did that in the sixties. That seems like something they’d have done back then.)

The point is, it’s like, expensive and medium hard to study airborne disease transmission experimentally. Adjust your judgment accordingly.


Anyway, fascinating book about the history of that which you’d think might be better understood by virtue of being a life-and-death matter millennia old, but which is, alas, not.

Here are some questions I was left with at the end of the book:

  • What influences whether pathogens are airborne-transmissible? Does any virus or spore coughed up from the lungs have about the same chance of becoming airborne, or do other properties of the microbe play a role? (I was hoping the book would explain this to me, but I think the research here may not exist.)
  • Zimmer is clearly pro-far-UV but the Wells’ findings on far UV lamps in schools were in fact pretty equivocal – do we have reason to think current far UV would fare better? (I know I linked a bunch of write-ups but I’m not actually caught up on the state of the research.)
  • Some microbes travel long distances while airborne – hundreds of miles, or for months at a time – often high in the earth’s atmosphere. How are these microbes not all obliterated by solar UV?

Find and read Air-borne by Carl Zimmer.


Support Eukaryote Writes Blog on Patreon. Sign up for emails for new posts in the blog sidebar.

Crossposted to: EukaryoteWritesBlog.com, Substack, LessWrong


Eukaryote writes for Asterisk Magazine

See my piece on the history of microbiology and the vast, invisible worlds that come into focus every time we figure out how to look closer:

Through the Looking Glass, and What Zheludev et al. (2024) Found There at Asterisk Magazine


I’ve written for Asterisk before: What I won’t eat, on arriving at an equilibrium on the “it’s bad when animals suffer” vs. “but animal products taste good” challenge.

Carl Sagan, nuking the moon, and not nuking the moon

In 1957, Nobel laureate microbiologist Joshua Lederberg and biostatistician J. B. S. Haldane sat down together and imagined what would happen if the USSR decided to explode a nuclear weapon on the moon.

The Cold War was on, Sputnik had recently been launched, and the 40th anniversary of the Bolshevik Revolution was coming up – a good time for an awe-inspiring political statement. Maybe they read a recent United Press article about the rumored USSR plans. Nuking the moon would make a powerful political statement on earth, but the radiation and disruption could permanently harm scientific research on the moon.

What Lederberg and Haldane did not know was that they were onto something – by the next year, the USSR really was investigating the possibility of dropping a nuke on the moon. They called it “Project E-4,” one of a series of possible lunar missions.

What Lederberg and Haldane definitely did not know was that that same next year, 1958, the US would also study the idea of nuking the moon. They called it “Project A119” and the Air Force commissioned research on it from Leonard Reiffel, a regular military collaborator and physicist at the University of Illinois. He worked with several other scientists, including a University of Chicago grad student named Carl Sagan.

“Why would anyone think it was a good idea to nuke the moon?”

That’s a great question. Most of us go about our lives comforted by the thought “I would never drop a nuclear weapon on the moon.” The truth is that given a lot of power, a nuclear weapon, and a lot of extremely specific circumstances, we too might find ourselves thinking “I should nuke the moon.”

Reasons to nuke the moon

During the Cold War, dropping a nuclear weapon on the moon would show that you had the rocketry needed to aim a nuclear weapon precisely at long distances. It would show off your spacefaring capability. A visible show could reassure your own side and frighten your enemies.

It could do the same things for public opinion that putting a man on the moon ultimately did. But it’s easier and cheaper:

  • As of the dawn of ICBMs you already have long-distance rockets designed to hold nuclear weapons
  • Nuclear weapons do not require “breathable atmosphere” or “water”
  • You do not have to bring the nuclear weapon safely back from the moon.

There’s not a lot of English-language information online about the USSR E-4 program to nuke the moon. The main reason they cite is wanting to prove that USSR rockets could hit the moon.3 The nuclear weapon attached wasn’t even the main point! That explosion would just be the convenient visual proof.

They probably had more reasons, or at least more nuance to that one reason – again, there’s not a lot of information accessible to me.* We have more information on the US plan, which was declassified in 1990, and probably some of the motivations for the US plan were also considered by the USSR for theirs.

  • Military
    • Scare USSR
    • Demonstrate nuclear deterrent1
      • Results would be educational for doing space warfare in the future2
  • Political
    • Reassure US people of US space capabilities (which were in doubt after the USSR launched Sputnik)
      • More specifically, that we have a nuclear deterrent1
    • “A demonstration of advanced technological capability”2
  • Scientific (they were going to send up batteries of instruments somewhat before the nuking, stationed at distances from the nuke site)
    • Determine thermal conductivity from measuring rate of cooling (post-nuking) (especially of below-dust moon material)
    • Understand moon seismology better via seismograph-type readings from various points at distance from the explosion
      • And especially get some sense of the physical properties of the core of the moon2
MANY PROBLEMS, ONE SOLUTION: BLOW UP THE MOON
As stated by this now-unavailable A Softer World merch shirt design. Hey, Joey Comeau and Emily Horne, if you read this, bring back this t-shirt! I will buy it.

Reasons to not nuke the moon

In the USSR, Aleksandr Zheleznyakov, a Russian rocket engineer, explained some reasons the USSR did not go forward with their project:

  • Nuke might miss the moon
    • and fall back to earth, where it would detonate, because of the planned design which would explode upon impact
      • in the USSR
      • in the non-USSR (causing international incident)
    • and circle sadly around the sun forever
  • You would have to tell foreign observatories to watch the moon at a specific time and place
    • And… they didn’t know how to diplomatically do that? Or how to contact them?

There’s less information on the US side. While they were not necessarily using the same sea-mine style detonation system that the planned USSR moon-nuke would have3, they were still concerned about a failed launch resulting in not just a loose rocket but a loose nuclear weapon crashing to earth.2

(I mean, not that that’s never happened before.)

Even in the commissioned report exploring the feasibility, Leonard Reiffel and his team clearly did not want to nuke the moon. They outline several reasons this would be bad news for science:

  • Environmental disturbances
  • Permanently disrupting possible organisms and ecosystems
    • In maybe the strongest language in the piece, they describe this as “an unparalleled scientific disaster”
  • Radiological contamination
    • There are some interesting things to be done with detecting subtle moon radiation – effects of cosmic rays hitting it, detecting a magnetosphere, various things like the age of the moon. Nuking the moon would easily spread radiation all over it. It wouldn’t ruin our ability to study this, especially if we had some baseline instrument readings up there first, but it wouldn’t help either.
  • To achieve the scientific objective of understanding moon seismology, we could also just put detectors on the moon and wait. If we needed more force, we could just hit the moon with rockets, or wait for meteor impacts.

I would also like to posit that nuking the moon is kind of an “are we the baddies?” moment, and maybe someone realized that somewhere in there.

Please don't do that :(

Afterwards

That afternoon when they imagined the USSR nuking the moon, Lederberg and Haldane ran the numbers and guessed that a nuclear explosion on the moon would be visible from earth. So the USSR’s incentive was there. They couldn’t do much about that, but they figured this would be politically feasible, and that this was frightening, because such a contamination would disrupt and scatter debris all over the unexplored surface of the moon – the closest and richest site for space research, a whole mini-planet of celestial material that had not passed through the destructive gauntlet of earth’s atmosphere (as meteors do, the force of reentry blasting away temperature-sensitive and delicate structures).

Lederberg couldn’t stop the USSR from nuking the moon. But early in the space age, he began lobbying for avoiding contaminating outer space. He pushed for a research-based approach and international cooperation, back when cooperating with the USSR was not generally on the table. His interest and scientific clout led colleagues to take this seriously. We still do this – we still sanitize outgoing spacecraft so that hardy Earth organisms will (hopefully) not colonize other planets.

A rocket taking earth organisms into outer space is forward contamination.

Lederberg then took some further steps and realized that if there was a chance Earth organisms could disrupt or colonize Moon life, there was a smaller but deadlier chance that Moon organisms could disrupt or colonize Earth life.

A rocket carrying alien organisms from other planets to earth is back contamination.

He realized that in returning space material to earth, we should proceed very, very cautiously until we can prove that it is lifeless. His efforts were instrumental in causing the Apollo program to have an extensive biosecurity and contamination-reduction program. That program is its own absolutely fascinating story.

Early on, a promising young astrophysicist joined Lederberg in A) pioneering the field of astrobiology and B) raising awareness of space contamination – former A119 contributor and future space advocate Carl Sagan.

Here’s what I think happened: a PhD student fascinated with space works on a secret project with his PhD advisor on nuking the moon. He assists with this work, finding it plausible, and is horrified for the future of space research. Stumbling out of this secret program, he learns about a renowned scientist (Joshua Lederberg) calling loudly for care in space contamination.

Sagan perhaps learns, upon further interactions, that Lederberg came to this fear after considering the idea that our enemies would detonate a nuclear bomb on the moon as a political show.

Why, yes, Sagan thinks. What if someone were foolish enough to detonate a nuclear bomb on the moon? What absolute madmen would do that? Imagine that. Well, it would be terrible for space research. Let’s try and stop anybody from ever doing that.

A panel from Homestuck of Dave blasting off into space on a jetpack, with Carl Sagan's face imposed over it. Captioned "THIS IS STUPID"
Artist’s rendition. || Apologies to, inexplicably, both Homestuck and Carl Sagan.

And if it helps, he made it! Over fifty years later and nobody thinks about nuking the moon very often anymore. Good job, Sagan.

This is just speculation. But I think it’s plausible.

If you like my work and want to help me out, consider checking out my Patreon! Thanks.

References

* We have, like, the personal website of a USSR rocket scientist – reference 3 below – which is pretty good.

But then we also have an interview that might have been done by journalist Adam Tanner with Russian rocket scientist Boris Chertok, and published by Reuters in 1999. I found this on an archived page from the Independent Online, a paper that syndicated with Reuters, where it was uploaded in 2012. I emailed Reuters and they did not have the interview in their archives, but they did have a photograph taken of Chertok from that day, so I’m wondering if they published the article but simply didn’t properly archive it later, and if the Independent Online is the syndicated publication that digitized this piece. (And then later deleted it, since only the Internet Archived copy exists now.) I sent a message to who I believe is the same Adam Tanner who would have done this interview, but haven’t gotten a response. If you have any way of verifying this piece, please reach out.

1. Associated Press, as found in the LA Times Archive, “U.S. Weighed A-Blast on Moon in 1950s.” 2000 May 18. https://www.latimes.com/archives/la-xpm-2000-may-18-mn-31395-story.html

2. Project A119, “A Study of Lunar Research Flights”, 1959 June 15. Declassified report: https://archive.org/details/DTIC_AD0425380

This is an extraordinary piece to read. I don’t think I’ve ever read a report where a scientist so earnestly explores a proposal and tries to solve various technical questions around it, and clearly does not want the proposal to go forward. For instance:

It is not certain how much seismic energy will be coupled into the moon by an explosion near its surface, hence one may develop an argument that a large explosion would help ensure success of a first seismic experiment. On the other hand, if one wished to proceed at a more leisurely pace, seismographs could be emplaced upon the moon and the nature of possible interferences determined before selection of the explosive device. Such a course would appear to be the obvious one to pursue from a purely scientific viewpoint.

3. Aleksandr Zheleznyakov, translated by Sven Grahn, updated 1999 or so. “The E-4 project – exploding a nuclear bomb on the Moon.” http://www.svengrahn.pp.se/histind/E3/E3orig.htm

Crossposted to LessWrong.

A 1500s illustration of three Aztec people with fancy food dishes in front of them.

Book review: Cuisine and Empire

[Header: Illustration of meal in 1500s Mexico from the Florentine Codex.]

People began cooking our food maybe two million years ago and have not stopped since. Cooking is almost a cultural universal. Bits of raw fruit or leaves or flesh are a rare occasional treat or garnish – we prefer our meals transformed. There are other millennia-old procedures we apply to raw ingredients besides cooking: separating parts, drying, soaking, slicing, grinding, freezing, fermenting. We do all of this for good reason: Cooking makes food more calorically efficient and less dangerous. Other techniques contribute to this, or help preserve food over time. Also, it tastes good.

Cuisine and Empire by Rachel Laudan is an overview of human history by major cuisines – the kind of things people cooked and ate. It is not trying to be a history of cultures, agriculture, or nutrition, although it touches on all of these things incidentally, as well as some histories of things you might not expect, like identity and technology and philosophy.

Grains (plant seeds) and roots were the staples of most cuisines. They’re relatively calorically dense, storeable, and grow within a season.

  • Remote islands really had to make do with whatever early colonists brought with them. Not only did pre-Columbian Hawaii not have metal, they didn’t have clay to make pots with! They cooked stuff in pits.

Running in the background throughout a lot of this is the clock of domestication – with enough time and enough breeding you can make some really naturally-digestible varieties out of something you’d initially have to process to within an inch of its life. It takes time, quantity, and ideally knowledge and the ability to experiment with different strains to get better breeds.

Potatoes came out of the Andes and were eaten alongside quinoa. Early potato cuisines didn’t seem to eat a lot of whole or cut-up potatoes – they processed the shit out of them, chopping, drying or freeze-drying them, soaking them, reconstituting them. They had to do a lot of this because the potatoes weren’t as consumer-friendly as modern breeds – less digestible composition, more phytotoxins, etc.

As cities and societies caught on, so did wealth. Wealthy people all around the world started making “high cuisines” of highly-processed, calorically dense, tasty, rare, and fancifully prepared ingredients. Meat and oil and sweeteners and spices and alcohol and sauces. Palace cooks came together and developed elaborate philosophical and nutritional theories to declare what was good to eat.

Things people nigh-universally like to eat:

  • Salt
  • Fat
  • Sugar
  • Starch
  • Sauces
  • Finely-ground or processed things
  • A variety of flavors, textures, options, etc
  • Meat
  • Drugs
    • Alcohol
    • Stimulants (chocolate, caffeine, tea, etc)
  • Things they believe are healthy
  • Things they believe are high-class
  • Pure or uncontaminated things (both morally and from, like, lead)

All people like these things, and low cuisines were not devoid of joy, but these properties showed up way more in high cuisines than low cuisines. Low cuisines tended to be a lot of grain or tubers and bits of whatever cooked or pickled vegetables or meat (often wild-caught, like fish or game) could be scrounged up.

In the classic way that oppressive social structures become self-reinforcing, rich people generally thought that rich people were better-off eating this kind of diet – carefully balanced – whereas it wasn’t just necessary, it was good for the poor to eat meager, boring foods. They were physically built for that. Eating a wealthy diet would harm them.

In lots of early civilizations, food and sacrifice of food was an important part of religion. Gods were attracted by offered meals or meat and good smells, and blessed harvests. There were gods of bread and corn and rice.

One thing I appreciate about this book is that it doesn’t just care about the intricate high cuisines, even if they were doing the most cooking, the most philosophizing about cooking, and the most recordkeeping. Laudan does her best to pay at least as much attention to what the 90+% of regular people were eating all of the time.


Here’s a great passage on feasts in Ancient Greece, at the Temple of Zeus in Olympia, at the start of each Olympic games (~400 BCE):

On the altar, ash from years of sacrifice, held together with water from the nearby River Alpheus, towered twenty feet into the air. One by one, a hundred oxen, draped with garlands, raised especially for the event and without marks of the plow, were led to the altar. The priest washed his hands in clear water in special metal vessels, poured out libations of wine, and sprinkled the animals with cold water or with grain to make them shake their heads as if consenting to their death. The onlookers raised their right arms to the altar. Then the priest stunned the lead ox with a blow to the base of the neck, thrust in the knife, and let the blood spill into a bowl held by a second priest. The killing would have gone on all day, even if each act took only five minutes.

Assistants dragged each felled ox to one side to be skinned and butchered. For the assembled crowd, cooks began grilling strips of beef, boiling bones in cauldrons, baking barley bannocks, and stacking up amphorae of wine. For the sacrifice, fat and leg and thigh bones rich in life-giving marrow were thrown on a fire of fragrant poplar branches, and the entrails were grilled. Symbolizing union, two or three priests bit together into each length of intestines. The bones whitened and crumbled; the fragrant smoke rose to the god.

Ancient Greek farmers had thin soil and couldn’t do much in the way of deliberate irrigation, so their food supply was more unpredictable than other places.

Country people kept a three-year supply of grain to protect against harvest failure and a four-year supply of oil. 

That’s so much!

That poor soil is also why the olive tree was relied on for oil instead of grains, which had better yields and took way less time to reach producing age. You could grow olive trees in places you couldn’t farm grain. And now we all know and love the oil from this tree. A tree is a wild place to get oil from! Similar story for grapevines.

  • The Spartans really liked this specific pork and blood soup called “black broth”.

This book was a fun read, on top of the cool history. Laudan has a straightforward listful way of describing cuisines that really puts me in the mind of a Redwall or a George R. R. Martin feast description.

A royal meal in the Indian Mauryan Empire (circa 300 BCE or so):

For court meals, the meat was tempered with spices and condiments to correct its hot, dry nature and accompanied by the sauces of high cuisine. Buffalo calf spit-roasted over charcoal and basted with ghee was served with sour tamarind and pomegranate sauces. Haunch of venison was simmered with sour mango and pungent and aromatic spices. Buffalo calf steaks were fried in ghee and seasoned with sour fruit, rock salt, and fragrant leaves. Meat was ground, formed into patties, balls, or sausage shapes, and fried, or it was sliced, dried to jerky, and then toasted.

Or eating in Teotihuacan, Mexico, around 600 CE:

To maize tamales or tortillas were added stews of domestic turkeys and dogs, and deer, rabbits, ducks, small birds, iguanas, fish, frog, and insects caught in the wild. Sauces were made with basalt pestles and mortars that were used to shear fresh green or dried and rehydrated red chiles, resulting in a vegetable puree that was thickened with tomatillos (Physalis philadelphica) or squash seeds. Beans, simply simmered in water, provided a tasty side dish. For the nobles, there were gourd bowls of foaming chocolate, seasoned with annatto and chili.

I’m a vegetarian who has no palate for spice and now all I can think about is eating dog stew made with sheared fresh green chiles and plain beans.

Be careful about reading this book while broke on an airplane. You will try to convince yourself this is all academic and that you’re not that curious about what iguana meat tastes like. You’ll lose that internal battle. Then, in desperation, your brain will start in on a new phase. You’ll tell yourself, as you scrape the last of your bag of traveler’s food – walnut meat, dried grapes, and pieces of sweet chocolate – that you are waiting to be brought a complimentary snack of baked wheat crackers flavored with salt, and a cup of hot coffee with cow’s milk, sweetened with cane sugar, and also that this is happening while you are flying. In this moment, you will be enlightened.


Grindstones are very important throughout history. A lot of cultures used hand grindstones at first and worked up to water- or animal-driven mills later. You grind grain to get flour, but you also grind things to get oil, spices, a different consistency of root, etc. People spent a lot of time grinding grain. There are a million kinds of hand grindstone. Some are still used today. When Roman soldiers marched around continents, they brought with them a relatively efficient rotary grindstone. They used mules to carry one 60-pound grindstone per 8 people. Every day, a soldier would grind for an hour and a half to feed the eight people. The grain would be stolen from storehouses conquered along the way.


Chapter 3 on Buddhist cuisines throughout Asia was especially great. Buddhism spread as sort of a reaction to the high sacrificial meat-n-grain cuisine of the time – a religious asceticism that really caught on. Ashoka spread it in India around 250 BCE, and over the following centuries it slowly seeped into China. Buddhists did not kill animals (mostly) nor drink alcohol, and ate a lot of rice. White rice, sugar, and dairy spread through Asia. In both China and India, as the rich got into it, Buddhism became its own new high cuisine: rare vegetables, sugar, ghee and other dairy, tea, and elaborate vegetarian dishes. So much for asceticism!

There is an extensive history of East Asian tofu and gluten-based meat substitutes that largely came out of vegetarian Buddhist influence. A couple 1100s and 1200s CE Chinese cookbooks are purely vegetarian and have recipes for things like mock lung (you know, like a mock hamburger or mock chicken, but if you’re missing the taste of lung.) (You might be interested in modern adaptations from Robban Toleno.)

Diets often go with religion. It’s a classic way to divide culture, and also, food and philosophy and ideas about health have always gone hand in hand in hand. Islamic empires spread cuisine over the Middle East. Christian empires brought their own food with them to other parts of the world.

A lot of early cuisines in Europe, the Middle East, India, Asia, and Mesoamerica were based on correspondences between types of food and elements and metaphysical ideas. You would try to reach balance. In Europe in the 1500s and 1600s, these old incorrect ideas about nutrition were replaced with bold new incorrect ideas about nutrition. Instead of corresponding to four elements, food was actually made of three chemical elements: salt, oil, and vapor. This came from the Swiss visionary Paracelsus, who thought chemistry could be based on the Bible, and who was called, a century later, a “master at murdering folk with chemistry”.

Fermenting took on its own magic:

Paracelsus suggested that “ferment” was spiritual, reinterpreting the links between the divine and bread in terms of his Protestant chemistry. When ferment combined with matter (massa in Latin, significantly also the word for bread dough), it multiplied. If this seems abstract, consider what happened in bread making. Bakers used a ferment or leaven[…] and kneaded it with flour and water. A few hours later, the risen dough was full of bubbles, or spirit. Ferment, close to the soul itself, turned lifeless stuff into vibrant, living bodies filled with spirit. The supreme example of ferment was Christ, described by the chemical physicians as fermentum, “the food of the soul.”

Again, cannot stress enough that the details of this food cosmology still got most things wrong. But I think they weren’t far off with this one.

There was an article I had bookmarked years ago about the very early days of microbiology and how many people interpreted this idea of tiny animalcules found in sexual fluid and sperm as literal demons. Does anyone know about this? I feel like these dovetail very nicely in a history of microbiological theology.


Corn really caught on in the 1800s as a food for the poor in East and Central Africa, Italy, Japan, India, and China. I don’t really know how this happened. I assume it grew better in some climates than native grains, like potatoes did in Europe?

Corn cuisine in the Americas knew to treat the corn with lye to release more of its nutrients, kill toxins, and make it taste better. This is called nixtamalization. When corn spread to Eurasia, it was grown widely, but nixtamalization didn’t make it over. The Eurasian eaters had to get those nutrients from elsewhere. They still ate corn, but it was a worse time!

  • In Iceland, where no crops would grow, people would use dried fish called “stockfish” and spread sheep butter on it and eat it instead of bread.

Caloric efficiency was a fun recurring theme. See again, the slow adoption of the potato into Europe. Cuisine has never been about maximizing efficiency. Once bare survival is assured, people want to eat what they know and what has high status in their minds.

I think this is a statement about the feedback cycles of individual people, for instance, subsistence farmers. Suppose you’re a Polish peasant in 1700 and you struggle by, year by year, growing wheat and rye. But this year you have access to potatoes, a food you somewhat mistrust. You might trust it enough to eat a cooked potato handed to you if you were starving – but when you make decisions about what to plant for a year, you will be reluctant to commit yourself and your family to a diet of a possibly-poisonous food (or to a failed crop – you don’t know how to grow potatoes either). Even if it’s looking like a dry year – especially if it’s looking like a dry year! – you know wheat and rye. You trust wheat and rye. You’ve made it through a lean year of wheat and rye before. You’ll do it again.

People are reluctant to give up their staple crops, but staples do get supplanted. Barley was solidly replaced by the somewhat-more-efficient wheat throughout Europe, millet by rice and wheat in China. But we settled on the ones we like:

The staples that humans had picked out centuries before 1000 B.C.E. still provide most of the world’s human food calories. Only sugarcane, in the form of sugar, was to join them as a major food source.

Around 1650 in Europe, protestant-derived French cuisine overtook high Catholic cuisine as the main food of the European aristocracy.

Catholic cuisine: roasts, fancy pies, pottage, “cold foods are bad for you,” fasting dishes, lard.

French cuisine: pastry, fancy sauces, bouillons and extracts, raw salads, a focus on vegetables, butter.

Coming up in more recent times, say the 1700s, was a very slow equalizing in society:

As more nations followed the Dutch and British in locating the source of rulers’ legitimacy not in hereditary or divine rights but in some form of consent or expression of the will of the people, it became increasingly difficult to deny to all citizens the right to eat the same kind of food.

After the French Revolution, high French cuisine was almost canceled in France. Everyone should eat as equals, even if the food was potatoes! Fortunately – unfortunately – as it happened, Napoleon came in after not too long and imperial high cuisine was back on a very small number of menus.

Speaking of potatoes and self-governance:

The only place where potatoes were adopted with enthusiasm was in distant [from Europe] New Zealand. The Maoris, accustomed to the subtropical roots that they had introduced to the North Island, welcomed them when introduced by Europeans in the 1770s because they grew in the colder South Island. Trading potatoes for muskets with European whalers and sealers enabled the Maoris to resist the British army from the 1840s to the 1870s.

Meanwhile, in Europe: Hey, we’re back to meat and grain! Britain really prided itself on beef and attributed the strength of its empire to beef. Even colonized peoples were like “whoa, maybe that beef and bread they’re eating really is making them that strong, we should try that.” Here’s a 1900 ad for beef extract that aged poorly:

[Source of this version. The brand of beef extract is spelled out of British colonies.]

That said, I did enjoy Laudan’s defense of British food. Starting in 1800, the British Empire was well underway, and what we now think of as stereotypical British cuisine was developing. It was heavy in sugar and sweets, white bread, beef, and prepared food. During the early industrial revolution, food and nutrition and the standard of living went down, but by the 1850s, all of it really came back.

It is worth noting that few cuisines have been so roundly condemned as nutritional and gastronomical disasters as British cuisine.

But Laudan points out that this food was not the aristocrat food (they were still eating French cuisine). It was the food of the working city poor. This is the rise of the “middling cuisines”, a real middle ground between the fancy high cuisine of a truly tiny percent of society and the humble cuisine of peasants who often faced starvation. For once, they had enough to eat. This was new.

After discussing the various ways in which the diet may have been bland or unappealing compared to neighboring cuisines –

Nonetheless, from the perspective of the urban salaried and working classes, the cuisine was just what they had wished for over the centuries: white bread, white sugar, meat, and tea. A century earlier, not only were these luxuries for much of the British population, but the humble were being encouraged to depend on potatoes, not bread, a real comedown in a society in which some kind of bread, albeit a coarse one, had been central to well-being for centuries. Now all could enjoy foodstuffs that had been the privilege of the aristocracy just a few generations earlier. Indeed, the meal called tea came close to being a true national cuisine. Even though tea retained traces of class distinctions, with snobberies about how teacups should be held, or whether milk or tea should be put into the cup first, everyone in the country, from the royal family, who were painted taking tea, to the family of a textile worker in the industrial north of the country, could sit down to white bread sandwiches or toast, jam, small cakes, and an iced sponge cake as a centerpiece. They could afford the tea that accompanied the meal. Set out on the table, tea echoed the grand buffets of eighteenth-century French high cuisine. [...] What seemed like culinary decline to those Britons who had always dined on high or bourgeois cuisine was a vast improvement to those enjoying those ampler and more varied cuisines for the first time.

[...]

Although to this day food continues to be used to reinforce minor differences in status, the hierarchical culinary philosophy of ancient and traditional cuisines was giving way to the more egalitarian culinary philosophy of modern cuisines.

A lot of this was facilitated by imperialism and/or outright slavery. The tea itself, for instance. But Britain was also deeply industrialized. Increased crop productivity, urbanization, and industrial processing were also making Britain’s home-grown food – wheat, meat – cheaper too. Or bringing these processes home. At the start of this period, sugar had been grown and harvested by slaves to feed Europe’s appetites, but in 1800, Prussian inventors figured out how to make sugar at scale from beets. 

The work was done by men paid salaries or wages, not by slaves or indentured laborers. The sugar was produced in northern Europe, not in tropical colonies. And the price was one all Europeans could afford. 

This was the sugar the British were eating then. Industrialization offered factory production of foods, canning, wildly cheap salt, and refrigeration.

We’re reaching the modern age, where the empires have shrunk and most people get enough calories and have access to industrially-cheap food and the fruits of global trade. Laudan discusses at length the hamburger and instant ramen – wheat flour, fat, meat or meat flavor, low price, and convenience. New theories of nutrition developed and we definitely got them right this time. The empires break up and worldwide leaders take pride in local cuisines, manufacturing a sense of identity through food if needed. Most people have the option of some dietary diversity and a middling cuisine. Go back to that list of things people like to eat. Most of us have that now! Nice!

  • Nigeria is the biggest importer of Norwegian stockfish. It caught on as a relief food delivered during Nigeria’s Biafran civil war. Here’s a 1960s photo of a Nigerian guy posing in a Bergen stockfish warehouse.

Aw, wait, is this a book review? Book review: Great stuff. There’s a lot of fascinating stuff not included in this summary. I wish it had more on Africa but I did like all the stuff about Eurasia that was in there. I feel like there are a few cultures with really really meat heavy cuisines – like Saami or Inuit cuisine – that could have been at least touched on. But also those aren’t like major cuisines and I can just learn about those on my own. Overall I appreciated the unwavering sense of compassion and evenhandedness – discussing cuisines and falsified theories of nutrition without casting judgment. Everyone’s just trying to eat dinner.

Rachel Laudan also has a blog. It looks really cool.

Cuisine and Empire by Rachel Laudan

The book is “Cuisine and Empire” by Rachel Laudan, 2012. h/t my friend A for the recommendation.


More food history from Eukaryote Writes Blog: Triptych in Global Agriculture.

If you want to support my work by chucking me a few bucks per post, check out my Patreon!

Defending against hypothetical moon life during Apollo 11

[Header image: Photo of the lunar lander taken during Apollo 11.]

In 1969, after the Apollo 11 crew successfully returned from landing on the moon, the astronauts, their spacecraft, and all the samples from the lunar surface were quarantined for 21 days. This was to account for the possibility that they were carrying hostile moon germs. Once the quarantine was up and the astronauts were not sick, and extensive biological testing on them and the samples showed no signs of infection or unexpected life, the astronauts were released.

We know now that the moon is sterile. We didn’t always know this. That was one of the things we hoped to find out from the Apollo 11 program, which was the first time not only that people would visit another celestial body, but that material from another celestial body would be brought back in a relatively pristine fashion to earth. The possibilities were huge.

The possibilities included life, although nobody thought this was especially likely. But in that slim chance of life, there was a chance that life would be harmful to humans or the earth environment. Human history is full of organisms wreaking havoc when introduced to a new location – smallpox in the Americas, rats in Pacific Islands, water hyacinth outside of South America. What if there were microbes on the moon? Even if there was a tiny chance, wouldn’t it be worth taking careful measures to avoid the risk of an unknown and irreversible change to the biosphere?

NASA, Congress, and various other federal agencies were apparently convinced to spend millions of dollars building an extensive new facility and to take extensive other measures to address this possibility.

This is how a completely abstract argument about alien germs was taken seriously and mitigated at great effort and expense during the 1969 Apollo landing.

An old knit tube with colorful stripes

Who invented knitting? The plot thickens

Last time on Eukaryote Writes Blog: You learned about knitting history.

You thought you were done learning about knitting history? You fool. You buffoon. I wanted to double check some things in the last post and found out that the origins of knitting are even weirder than I guessed.

Humans have been wearing clothes to hide our sinful sinful bodies from each other for maybe about 20,000 years. To make clothes, you need cloth. One way to make cloth is animal skin or membrane, that is, leather. If you want to use it in any complicated or efficient way, you also need some way to sew it – with very thin strips of leather, or with sinew or plant fiber spun into thread. Also popular since very early on is taking that thread and turning it into cloth. There are a few ways to do this.

A drawing showing loose fiber, which turns into twisted thread, which is arranged in various ways to make different kinds of fabric structures. Depicted are the structures for: naalbound, woven, knit, looped, and twined fabric.
By the way, I’m going to be referring to “thread” and “yarn” interchangeably from here on out. Don’t worry about it.

(Can you just sort of smush the fiber into cloth without making it into thread? Yes. This is called felting. How well it works depends on the material properties of the fiber. A lot of traditional Pacific Island cloth was felted from tree bark.)

Now with all of these, you could probably make some kind of cloth by taking threads and, by hand, shaping them into these different structures. But that sounds exhausting and nobody did that. Let’s get tools involved. Each of these structures corresponds to a different manufacturing technique.

By far, the most popular way of making cloth is weaving. Everyone has been weaving for tens of thousands of years. It’s not quite a cultural universal but it’s damn close. To weave, you need a loom.1 There are ten million kinds of loom. Most primitive looms can make a piece of cloth that is, at most, the size of the loom. So if you want to make a tunic that’s three feet wide and four feet long, you need cloth that’s at least three feet wide and four feet long, and thus, a loom that’s at least three feet wide and four feet long. You can see how weaving was often a stationary affair.

Recap

Here’s what I said in the last post: Knitting is interesting because the manufacturing process is pretty simple, needs simple tools, and is portable. The final result is also warm and stretchy, and can be made in various shapes (not just flat sheets). And yet, it was invented fairly recently in human history.

I mostly stand by what I said in the last post. But since then I’ve found some incredible resources, particularly the scholarly blogs Loopholes by Cary “stringbed” Karp and Nalbound by Anne Marie Decker, which have sent me down new rabbit-holes. The Egyptian knit socks I outlined in the last post sure do seem to be the first known knit garments, like, a piece of clothing that is meant to cover your body. They’re certainly the first known ones that take advantage of knitting’s unique properties: of being stretchy, of being manufacturable in arbitrary shapes. The earliest knitting is… weirder.

SCA websites

Quick sidenote – I got into knitting because, in grad school, I decided that in the interests of well-roundedness and my ocular health, I needed hobbies that didn’t involve reading research papers. (You can see how far I got with that). So I did two things: I started playing the autoharp, and I learned how to knit. Then, I was interested in the overlap between nerds and handicrafts, so a friend in the Society for Creative Anachronism pitched me on it and took me to a coronation. I was hooked. The SCA covers “the medieval period”; usually, 1000 CE through 1600 CE.

I first got into the history of knitting because I was checking if knitting counted as a medieval period art form. I was surprised to find that the answer was “yes, but barely.” As I kept looking, a lot of the really good literature and analysis – especially experimental archaeology – came out of blogs of people who were into it as a hobby, or perhaps as a lifestyle that had turned into a job like historical reenactment. This included a lot of people in the SCA, who had gone into these depths before and just wrote down what they found and published it for someone else to find. It’s a really lovely knowledge tradition to find one’s self a part of.

Aren’t you forgetting sprang?

There’s an ancient technique that gets some of the benefits of knitting, which I didn’t get to in the last post. It’s called sprang. Mechanically, it’s kind of like braiding. Like weaving, sprang requires a loom (the size of the cloth it produces) and makes a flat sheet. Like knitting, however, it’s stretchy.

Sprang shows up in lots of places – the oldest from around 1400 BCE in Denmark, but also other places in Europe, plus (before colonization!): Egypt, the Middle East, central Asia, India, Peru, Wisconsin, and the North American Southwest. Here’s a video where re-enactor Sally Pointer makes a sprang hairnet with iron-age materials.

Despite being widespread, it was never a common way to make cloth – everyone was already weaving. The question of the hour is: Was it used to make socks?

Well, there were probably sprang leggings. Dagmar Drinkler has made historically-inspired sprang leggings, which demonstrate that sprang colorwork creates some of the intricate designs we see painted on Greek statues – like this 480 BCE Persian archer.

I haven’t found any attestations of historical sprang socks. The Sprang Lady has made some, but they’re either tube socks or have separately knitted soles.

Why weren’t there sprang socks? Why didn’t sprang, widespread as it is, take on the niche that knitting took?

I think there are two reasons. One, remember that a sock is a shaped garment, tube-like, usually with a bend at the heel, and that like weaving, sprang makes a flat sheet. If you want another shape, you have to sew it in. It’s going to lose some stretch where it’s sewn at the seam. It’s just more steps and skills than knitting a sock.

The second reason is warmth. I’ve never done sprang myself – from what I can tell, it has more of a net-like openness upon manufacture, unlike knitting which comes with some depth to it. Even weaving can easily be made pretty dense simply by putting the threads close together. I think, overall, a sprang fabric garment made with primitive materials is going to be less warm than a knit garment made with primitive materials.

Those are my guesses. I bring it up merely to note that there was another thread → cloth technique that made stretchy things that didn’t catch on the same way knitting did. If you’re interested in sprang, I cannot recommend The Sprang Lady’s work highly enough.

Anyway, let’s get back to knitting.

Knitting looms

The whole thing about Roman dodecahedrons being (hypothetically) used to knit glove fingers, described in the last post? I don’t think that was actually the intended purpose, for the reasons I described re: knitting wasn’t invented yet. But I will cop to the best argument in its favor, which is that you can, in fact, knit glove fingers with a Roman dodecahedron.

“But how?” say those of you not deeply familiar with various fiber arts. “That’s not needles,” you say.

You got me there. This is a variant of a knitting loom. A knitting loom is a hoop with pegs to make knit tubes. This can be the basis of a knitting machine, but you can also knit on one on its own. They make more consistent knit tubes with less required hand-eye coordination. (You can also make flat panels with them, especially a version called a knitting rake, but since all of the early knit pieces we’re talking about are tubes anyhow, let’s ignore that for the time being.)

Knitting on a modern knitting loom. || Photo from Cynthia M. Parker on flickr, under a CC BY-SA 2.0 license.

Knitting on a loom is also called spool knitting (because you can use a spool with nails in it as the loom for knitting a cord) and tomboy knitting (…okay). Structurally, I think this is also basically the same thing as lucet cord-making, so let’s go ahead and throw that in with this family of techniques. (The earliest lucets are from ~1000 CE Viking Sweden and perhaps medieval Viking Britain.)

The important thing to note is that loom knitting makes a result that is, structurally, knit. It’s difficult to tell whether a given piece is knit with a loom or needles, if you didn’t see it being made. But since it’s a different technique, different aspects become easier or harder.

A knitting loom sounds complicated but isn’t hard to make, is the thing. Once you have nails, you can make one easily by putting them in a wood ring. You could probably carve one from wood with primitive tools. Or forge one. So we have the question: Did knitting needles or knitting looms come first?

We actually have no idea. There aren’t objects that are really clearly knitting needles OR knitting looms until long after the earliest pieces of knitting. This strikes me as a little odd, since wood and especially metal should preserve better than fabric, but it’s what we’ve got. It’s probably not helped by the fact that knitting needles are basically just smooth straight sticks, and it’s hard to say that any smooth straight stick is conclusively a knitting needle (unless you find it with half a sock still on it.)

(At least one author, Isela Phelps, speculates that finger-knitting, which uses the fingers of one hand like a knitting loom and makes a chunky knit ribbon, came first – presumably because, well, it’s easier to start from no tools than to start from a specialized tool. This is possible, although the earliest knit objects are too fine and have too many stitches to have been finger-knit. The creators must have used tools.)

(stringbed also points out that a piece of whale baleen can be used as a circular knitting needle, and that the relevant cultures did have access to and trade in whale parts. While we have no particular evidence that baleen was used this way, it does mean that humanity wouldn’t have had to invent plastic before inventing the circular knitting needle – we could have had that since the prehistoric period. So, I don’t know, maybe it was whales.)

THE first knitting

The earliest knit objects we have… ugh. It’s not the Egyptian socks. It’s this.

Photo of an old, long, thin knit tube in lots of striped colors.
One of the oldest knit objects. || Photo from Musée du Louvre, AF 6027.

There’s a pair of long, thin, colorful knit tubes, about an inch wide, a few feet long. They’re pretty similar to each other. Due to the problems inherent in time passing and the flow of knowledge, we know one of them is probably from Egypt, and was carbon-dated to 425-594 CE. The other quite similar tube, of a similar age, has not been carbon dated but is definitely from Egypt. (The original source text for this second artifact is in German, so I didn’t bother trying to find it, and instead refer to stringbed’s analysis. See also matthewpius guestblogging on Loopholes.) So between the two of them, we have a strong guess that these knit tubes were manufactured in Egypt around 425-594 CE, about 500 years before socks.

People think it was used as a belt.

This is wild to me. Knitting is stretchy, and I did make fun of those peasants in 1300 CE for not having elastic waistlines, so I could see a knitted belt being more comfortable than other kinds of belts.2 But not a lot better. A narrow knit belt isn’t going to distribute the force onto the body much differently than a regular non-stretchy belt, and regular non-stretchy belts were already in great supply – woven, rope, leather, etc. Someone invented a whole new means of cloth manufacture and used it to make a thing that already existed, just slightly differently.

Then, as far as I can tell, there are no knit objects in the known historical record for five hundred years until the Egyptian socks pop up.

Pulling objects out of the past is hard. Especially things made from cloth or animal fibers, which rot (as compared to metal, pottery, rocks, bones, which last so long that in the absence of other evidence, we name ancient cultures based on them.) But every now and then, we can. We’ve found older bodies and textiles preserved in ice and bogs and swamps.3 We have evidence of weaving looms and sewing needles and pictures of people spinning or weaving cloth and descriptions of them doing it, from before and after. I’m guessing that the technology just took a very long time to diversify beyond belts.

Speaking of which: how was the belt made? As mentioned, we don’t find anything until much later that is conclusively a knitting needle or a knitting loom. The belts are also, according to matthewpius on Loopholes, made with a structure called double knitting. The effect is (as indicated by Pallia – another historic reenactor blog!) kind of hard to produce with knitting needles in the way they did it, but pretty simple to do with a knitting loom.

(Another Egyptian knit tube belt from an unclear number of centuries later.)

Viking knitting

You think this is bad? Remember before how I said knitting was a way of manufacturing cloth, but that it was also definable as a specific structure of thread that could be made with different methods?

The oldest knit object in Europe might be a cup.

Photo of a richly decorated old silver cup.
The Ardagh Chalice. || Photo by Sailko under a CC BY-SA 3.0 license.

You gotta flip it over.

Another photo of the ornate chalice from the equally ornate bottom. Red arrows point to some intricate wire decorations around the rim.
Underside of the Ardagh Chalice. || Adapted from a Metropolitan Museum image.

Enhance.

Black and white zoom in on the wire decorations. It's more  clearly a knit structure.
Photo from Robert M. Organ’s 1963 article “Examination of the Ardagh Chalice-A Case History”, where they let some people take the cup apart and put it back together after.

That’s right, this decoration on the bottom of the Ardagh Chalice is knit from wire.
Another example is the decoration on the side of the Derrynaflan Paten, a plate made in 700 or 800 CE in Ireland. All the examples seem to be from churches, hidden by or from Vikings. Over the next few hundred years, there are some other objects in this technique. They’re tubes knitted from silver wire. “Wait, can you knit with wire?” Yes. Stringbed points out that knitting wire with needles or a knitting loom would be tough on the valuable silver wire – it could break or get distorted.

Photo of an ornate silver plate with gold decorations. There are silver knit wire tubes around the edge.
The Derrynaflan Paten, zoomed in on the knit decorations around the edge. || Adapted from this photo by Johnbod, under a CC BY-SA 3.0 license.

What would make sense to do it with is a little hook, like a crochet hook. But that would only work on wire – yarn doesn’t have the structural integrity to be knit with just a hook, you need to support each of the active loops.

So was the knit structure just invented separately by Viking silversmiths, before it spread to anyone else? I think it might have been. It’s just such a long time before we see knit cloth, and we have this other plausible story for how knit cloth got to Europe.

(I wondered if there was a connection between the Viking knitting and their sources of silver. Vikings did get their silver from the Islamic world, but as far as I can tell, mostly from Iran, which is pretty far from Egypt and doesn’t have an ancient knitting history – so I can’t find any connection there.)

The Egyptian socks

Let’s go back to those first knit garments (that aren’t belts), the Egyptian knit blue-and-white socks. There are maybe a few dozen of these, now found in museums around the world. They seem to have been pulled out of Egypt (people think Fustat) by various European/American collectors. People think that they were made around 1000-1300 CE. The socks are quite similar: knit, made of cotton, in white and 1-3 shades of indigo, geometric designs sometimes including Kufic characters.

I can’t find a specific origin location (other than “probably Egypt, maybe Fustat?”) for any of them. The possible first sock mentioned in the last post is one of these – I don’t know if there are any particular reasons for thinking that sock is older than the others.

This one doesn’t seem to be knit OR naalbound. Anne Marie Decker at Nalbound.com thinks it’s crocheted and that the date is just completely wrong. To me, at least, this casts doubt on all the other dates of similar-looking socks.

That anomalous sock scared me. What if none of them had been carbon-dated? Oh my god, they’re probably all scams and knitting was invented in 1400 and I’m wrong about everything. But I was told in a historical knitting facebook group that at least one had been dated. I found the article, and a friend from a minecraft discord helped me out with an interlibrary loan. I was able to locate the publication where Antoine de Moor, Chris Verhecken-Lammens and Mark Van Strydonck did in fact carbon-date four ancient blue-and-white knit cotton socks and found that they dated back to approximately 1100 CE – with a 95% chance that they were made somewhere between 1062 and 1149 CE. Success!

Helpful research tip: for the few times when the SCA websites fail you, try your facebook groups and your minecraft discords.

Estonian mitten

Photo of a tattered old fragment of knitting. There are some colored designs on it in blue and red.
Yeah, this is all of it. Archeology is HARD. [Image from Anneke Lyffland’s writeup.]

Also, here’s a knit fragment of a mitten found in Estonia. (I don’t have the expertise or the mitten to determine it myself, but Anneke Lyffland (another SCA name), a scholar who studied it, is aware of cross-knit-looped naalbinding – like the Peruvian knit-lookalikes mentioned in the last post – and doesn’t believe this mitten was naalbound.) It was part of a burial dated to 1238–1299 CE. This is fascinating and does suggest a culture of knitted practical objects, in Eastern Europe, in this time period. This is the earliest East European non-sock knit fabric garment that I’m aware of.

But as far as I know, this is just the one mitten. I don’t know much about archaeology in the area and era, and can’t speculate as to whether this is evidence that knitting was rare or whether we have very few wool textiles from the area and it’s not that surprising. (The voice of shoulder-Thomas-Bayes says: Lots of things are evidence! Okay, I can’t speculate as to whether it’s strong evidence, are you happy, Reverend Bayes?) Then again, a bunch of speculation in this post is also based on two maybe-belts, so, oh well. Take this with salt.

By the way, remember when I said crochet was super-duper modern, like invented in the 1800s?

Literally a few days ago, who but the dream team of Cary “stringbed” Karp and Anne Marie Decker published an article in Archaeological Textiles Review identifying several ancient probably-Egyptian socks thought to be naalbound as being actually crocheted.

This comes down to the thing about fabric structures versus techniques. There’s a structure called slip stitch that can be either crocheted or naalbound. Since we know naalbinding is that old, if you’re looking at an old garment and see slip stitch, maybe you say it was naalbound. But basically no fabric garment is just continuous structure all the way through. How do the edges work? How did it start and stop? Are there any pieces worked differently, like the turning of a heel or a cuff or a border? Those parts might be more clearly worked with a crochet hook than a naalbinding needle. And indeed, that’s what Karp and Decker found. This might mean that those pieces are forgeries – there’s been no carbon dating. But it might mean crochet is much, much older than previously thought.

My hypothesis

Knitting was invented sometime around or perhaps before 600 CE in Egypt.

From Egypt, it spread to other Muslim regions.

It spread into Europe via one or more of these:

  1. Ordinary cultural diffusion northwards
  2. Islamic influence in the Iberian Peninsula
    • In 711 CE, the Umayyad Caliphate conquered most of the Iberian Peninsula, establishing Al-Andalus…
      • Kicking off a lot of Islamic presence in and control over the area up until 1400 CE or so…
  3. Meanwhile, starting in 1095 CE, the Latin Church called for armies to take Jerusalem from its Muslim rulers, kicking off the Crusades.
    • …Peppering Arabic influences into Europe, particularly France, over the next couple centuries.

… Also, the Vikings were there. They separately invented the knitting structure in wire, but never got around to trying it out in cloth, perhaps because the required technique was different.

Another possibility

Wrynne, AKA Baroness Rhiall of Wystandesdon (what did I say about SCA websites?), a woman who knows a thing or two about socks, believes that, based on these socks plus the design of other historical knit socks, the route goes something like:

??? points to Iran, which points to: A. Eastern Europe, then to 1. Norway and Sweden and 2. Russia. B. to ???, to Spain, to Western Europe.

I don’t know enough about socks to have a sophisticated opinion on her evidence, but the reasoning seems solid to me. For instance, as she explains, old Western European socks are knit from the cuff of the sock down, whereas old Middle Eastern and East European socks are knit from the toe of the sock up – which is also how Eastern and Northern European naalbound socks were shaped. Baroness Rhiall thinks Western Europe invented its sockmaking techniques independently, based on only having had a little experience with a few late 1200s/1300s knit pieces from Moorish artisans.

What about tools?

Here’s my best guess: The Egyptian tubes were made on knitting looms.

The Viking tubes were invented separately, made with a metal hook as stringbed speculates, and never had any particular connection to knitting with yarn.

At some point, in the Middle East, someone figured out knitting needles. The Egyptian socks and Estonian mitten and most other things were knit in the round on double-pointed needles.

I don’t like this as an explanation, mostly because of how it posits 3 separate tools involved in the earliest knit structures – that seems overly complicated. But it’s what I’ve got.

Knitting in the tracks of naalbinding

I don’t know if this is anything, but here are some places we also find lots of naalbinding, beginning from well before the medieval period: Egypt. Oman. The UAE. Syria. Israel. Denmark. Norway. Sweden. Sort of the same path that we predict knitting traveled in.

I don’t know what I’m looking at here.

  • Maybe this isn’t real and these places just happen to preserve textiles better
  • Longstanding trade or migration routes between North Africa, the Middle East, and Eastern Europe?
  • Culture of innovation in fiber?
  • Maybe fiber is more abundant in these areas, and thus there was more affordance for experimenting. (See below.)

It might be a coincidence. But it’s an odd coincidence, if so.

Why did it take so long for someone to invent knitting?

This is the question I set out to answer in the initial post, but then it turned into a whole thing and I don’t think I ever actually answered my question. Very, very speculatively: I think knitting is just so complicated that it took thousands of years, and an environment rich in fiber innovation, for someone to invent and make use of the series of steps that is knitting.

Take this next argument with a saltshaker, but: my intuitions back this up. I have a good visual imagination. I can sort of “get” how a slip knot works. I get sewing. I understand weaving, I can boil it down in my mind to its constituents.

There are birds that do a form of sewing and a form of weaving. I don’t want to imply that if an animal can figure it out, it’s clearly obvious – I imagine I’d have a lot of trouble walking if I were thrown into the body of a centipede, and chimpanzees can drastically outperform humans on certain cognitive tasks – but I think, again, it’s evidence that it’s a simpler task in some sense.

Same with sprang. It’s not a process I’m familiar with, but watching Sally Pointer do it on a very primitive loom, I can understand it and could probably do it now. Naalbinding – well, it’s knots, and given a needle and knowing how to make a knot, I think it’s pretty straightforward to tie a bunch of knots on top of each other to make fabric out of it.

But I’ve been knitting for quite a while now and have finished many projects, and I still can’t say I totally get how knitting works. I know there’s a series of interconnected loops, but how exactly they don’t fall apart? How the starting string turns into the final project? It’s not in my head. I only know the steps.

I think that if you erased my memory and handed me some simple tools, especially a loom, I could figure out how to make cloth by weaving. I think there’s also a good chance I could figure out sprang, and naalbinding. But I think that if you handed me knitting needles and string – even if you told me I was trying to get fabric made from a bunch of loops that are looped into each other – I’m not sure I would get to knitting.

(I do feel like I might have a shot at figuring out crochet, though, which is supposedly younger than any of these anyway, so maybe this whole line of thinking means nothing.)

Idle hands as the mother of invention?

Why do we innovate? Is necessity the mother of invention?

This whole story suggests not – or at least, that’s not the whole story. We have the first knit structures in belts (which already existed in other forms) and decorative silver wire (strictly ornamental). We have knit socks from Egypt, not a place known for demanding warm foot protection. What gives?

Elizabeth Wayland Barber says this isn’t just knitting – she points to the spinning jenny and the power loom, both major innovations in textile production, which were invented relatively recently by men despite thousands of previous years of women producing yarn and cloth. In Women’s Work: The First 20,000 Years, she writes:

“Women of all but the top social and economic classes were so busy just trying to get through what had to be done each day that they didn’t have excess time or materials to experiment with new ways of doing things.”

This suggests a somewhat different mechanism of invention – sure, you need a reason to come up with, or at least follow up on, a discovery, but you also need the space to play. 90% of everything is crap; you need to be really sure that you can throw away (or unravel, or afford the time to re-make) 900 crappy garments before you hit upon the sock.

Bill Bryson, in the introduction to his book At Home, writes about the phenomenon of clergy in the UK in the 1700s and 1800s. To become an ordained minister, one needed a university degree, but not in any particular subject, and little ecclesiastical training. Duties were light; most ministers read a sermon out of a prepared book once a week and that was about it. They were paid in tithes from local landowners. Bryson writes:

“Though no one intended it, the effect was to create a class of well-educated, wealthy people who had immense amounts of time on their hands. In consequence many of them began, quite spontaneously, to do remarkable things. Never in history have a group of people engaged in a broader range of creditable activities for which they were not in any sense actually employed.”

He describes some of the great amount of intellectual work that came out of this class, including not only the aforementioned power loom, but also: scientific descriptions of dinosaurs, the first Icelandic dictionary, Jack Russell terriers, submarines, aerial photography, the study of archaeology, Malthusian traps, the telescope that discovered Uranus, werewolf novels, and – courtesy of the original Thomas Bayes – Bayes’ theorem.

I offhandedly posited a random per-person effect in the previous post – each individual has a chance of inventing knitting, so eventually someone will figure it out. There’s no way this can be the whole story. A person in a culture that doesn’t make clothes mostly out of thread, like the traditional Inuit (thread is used to sew clothes, but the clothes are very often sewn out of animal skin rather than woven fabric) seems really unlikely to invent knitting. They wouldn’t have lots of thread about to mess around with. So you need the people to have a degree of familiarity with the materials. You need some spare resources. Some kind of cultural lenience for doing something nonstandard.

…But is that the whole story? The Incan Empire was enormous, with 12,000,000 citizens at its height. They didn’t have a written language. They had the quipu system for recording numbers with knotted string, but they didn’t have a written language. (The Mayans, far away in Mesoamerica, did.) Easter Island, between its colonization by humans in 1000 CE and its worse colonization by Europeans in 1700 CE, had a maximum population of maybe 12,000. It’s one of the most remote islands in the world. In isolation from other societies, they did develop a written language, in fact Polynesia’s only native written language.

Color photo of a worn wooden tablet engraved with intricate Rongorongo characters.
One of ~26 surviving pieces of Rongorongo, the undeciphered written script of Easter Island. This is Text R, the “Small Washington tablet”. Photo from the Smithsonian Institution. (Image rotated to correspond with the correct reading order, as a courtesy to any Rongorongo readers in my audience. Also, if there are any Rongorongo readers in my audience, please reach out. How are you doing that?!)
A black and white photo of the same tablet. The lines of characters are labelled (e.g. Line 1, Line 2) and the  symbols are easier to see. Some look like stylized humans, animals, and plants.
The same tablet with the symbols slightly clearer. Image found on kohaumoto.org, a very cool Rongorongo resource.

I don’t know what to do with that.

Still. My rough model is:

A businessy chart labelled "Will a specific group make a specific innovation?" There are three groups of factors feeding into each other. First is Person Factors, with a picture of a person in a power wheelchair: Consists of [number of people] times [degree of familiarity with art]. Spare resources (material, time). And cultural support for innovation. Second is Discovery Factors, with a picture of a microscope: Consists of how hard the idea "is to have", benefits from discovery, and [technology required] - [existing technology]. ("Existing technology" in blue because that's technically a person factor.) Third is Special Sauce, with a picture of a wizard. Consists of: Survivorship Bias and The Easter Island Factor (???)

The concept of this chart amused me way too much not to put it in here. Sorry.

(“Survivorship bias” meaning: I think it’s safe to say that if your culture never developed (or lost) the art of sewing, the culture might well have died off. Manipulating thread and cloth is just so useful! Same with hunting, or fishing for a small island culture, etc.)

…What do you mean Loopholes has articles about the history of the autoharp?! My Renaissance man aspirations! Help!


Delightful: A collection of 1900s forgeries of the Paracas textile. They’re crocheted rather than naalbound.

1 (Uh, usually. You can finger weave with just a stick or two to anchor some yarn to but it wasn’t widespread, possibly because it’s hard to make the cloth very wide.)

2 I had this whole thing ready to go about how a knit belt was ridiculous because a knit tube isn’t actually very stretchy “vertically” (or “warpwise”), and most of its stretch is “horizontal” (or “weftwise”). But then I grabbed a knit tube (fingerless glove) in my environment and measured it at rest and stretched, and it stretched about as far both ways. So I’m forced to consider that a knit belt might be a reasonable thing to make for its stretchiness. Empiricism: try it yourself!

3 Fun fact: Plant-based fibers (cotton, linen, etc) are mostly made of carbohydrates. Animal-based fibers (silk, wool, alpaca, etc) and leather are mostly made of protein. Fens are wetlands that are alkaline and bogs are acidic. Carbohydrates decay in acidic bogs but are well-preserved in alkaline fens. Proteins dissolve in alkaline fens but last in acidic bogs. So it’s easier to find preserved animal material or fibers in bogs and preserved plant material or fibers in fens.


Cross-posted to LessWrong.

Fiber arts, mysterious dodecahedrons, and waiting on “Eureka!”

Part 1: The anomaly

This story starts, as many stories do, with my girlfriend 3D-printing me a supernatural artifact. Specifically, one of my favorite SCPs, SCP-184.

This attempt got about 75% of the way through. Close enough.

We had some problems with the print. Did the problems have anything to do with printing a model of a mysterious artifact that makes spaces bigger on the inside, via a small precisely-calibrated box? I would say no, there’s no way that could be related.

Anyway, the image used for the SCP in question, and thus also the final printed model, is based on a Roman dodecahedron. Roman dodecahedrons are a particular shape of metal object that have been dug up in digs from all over the Roman period, and we have no idea why they exist.

Roman dodecahedra. || Image source unknown.

Many theories have been advanced. You might have seen these in an image that was going around the internet, which ended by suggesting that the object would work perfectly for knitting the fingers of gloves.

There isn’t a clear alternative explanation for what these are. A tool for measuring coins? A ruler for calculating distances? A sort of Roman fidget spinner? This author thinks it displays a date and has a neat explanation for why. (Experimental archaeology is so cool, y’all.)

Whatever the purpose of the Roman dodecahedron was, I’m pretty sure it’s not (as the meme implies is obvious) for knitting glove fingers.1

Why?

1: The holes are always all different sizes, and you don’t need that to make glove fingers.

2: You could just do this with a donut with pegs in it, you don’t need a precisely welded dodecahedron. It does work for knitting glove fingers, you just don’t need something this complicated.

3: The Romans hadn’t invented knitting.

Part 2: The Ancient Romans couldn’t knit

Wait, what? Yeah, the Romans couldn’t knit. The Ancient Greeks couldn’t knit, the Ancient Egyptians couldn’t knit. Knitting took a while to take off outside of the Middle East and the West, but still, almost all of the Imperial Chinese dynasties wouldn’t have known how. Knitting is a pretty recent invention, time-wise. The earliest knit objects we have are from Egypt around 1000 CE.

Possibly the oldest knit sock known, ca 1000-1200 CE according to this page. || Photo is public domain from the George Washington University Textile Museum Collection.

This is especially surprising because knitting is useful for two big reasons:

First, it’s very easy to do. It takes yarn and two sticks and children can learn how. This is pretty rare for fabric manufacturing – compare, for instance, weaving, which takes an entire loom.

Sidenote: Do you know your fabrics? This next section will make way more sense if you do.

Woven fabric: commonly found in trousers, collared/button-up shirts, bedsheets, dish towels, woven boxers, quilts, coats, etc. Not stretchy. Loose threads won’t make the whole cloth unravel.

Knit fabric: commonly found in T-shirts, polo shirts, leggings, underwear, anything made of jersey fabric, sweaters, sweatpants, socks. Stretchy. If you pull on a loose thread, the cloth unravels.

Second, and oft-underappreciated, knitted fabric is stretchy. We’re spoiled by the riches of elastic fabric today, but it wasn’t always so. Modern elastic fabric uses synthetic materials like spandex or neoprene; the older version was natural latex rubber, and it seems to have taken until the 1800s to use rubber to make clothing stretchy. Knit fabric stretches without any of those.

Before knitting, your options were limited – you could only make clothing that didn’t stretch, which I think explains a lot of why medieval and earlier clothing “looks that way”. A lot of belts and drapey fabric. If something is form-fitting, it’s probably laced. (…Or just more-closely tailored, which unrelatedly became more of a thing later in the medieval period.)

You think these men had access to comfortable elastic waistlines? No they did not. || Image from the Luttrell Psalter, ~1330.

You could also use woven fabric on the bias, which stretches a little.

Woven fabric is stretchier this way. Grab something made of woven fabric and try it out. || Image by PKM, under a CC BY-SA 3.0 license.

Medieval Europe made stockings from fabric cut like this. Imagine a sock made out of tablecloth or button-down-shirt-type material. Not very flexible. Here’s a modern recreation on Etsy.

Other kinds of old “socks” were more flexible but more obnoxious, made of a long strip of bias-cut fabric that you’d wrap around your feet. (Known as: winingas, vindingr, legwraps, wickelbänder, or puttees.) Historical reenactors wear these sometimes. I’m told they’re not very flexible and restrict movement, and that they take practice to put on correctly.

Come 1000 CE, knitting arrives on the scene.

Which is to say, it’s no surprise that the first knitted garments we see are socks! They get big in Europe over the next 300 years or so. Richly detailed bags and cushions also appear. We start seeing artistic depictions of knitting for the first time around now too.

Italian Madonna knitting with four needles, ~1350. Section of this miniature by Tommaso de Modena.

Interestingly, this early knitting was largely circular, meaning that you produce a tube of cloth rather than a flat sheet. This meant that the first knitting was done not with two sticks and some yarn, but four sticks and some yarn. This is much easier for making socks and the like than using two needles would be. …But also means that the invention process actually started with four needles and some yarn, so maybe it’s not surprising it took so long.2

(Why did it take so long to invent knitting flat cloth with two sticks? Well, there’s less of a point to it, since you already have lots of woven cloth, and you can do a lot of clothes – socks, sweaters, hats, bags – by knitting tubes. Also, by knitting circularly, you only have to know how to do one stitch (the knit stitch), whereas flat knitting requires you also use a different stitch (the purl stitch) to make a smooth fabric that looks like and is as stretchy as round knitting. If you’re not a knitter, just trust me – it’s an extra step.)

(You might also be wondering: What about crochet? Crochet was even more recent. 1800s.)

Part 3: The Ancient Peruvians couldn’t knit either, but they did something that looks the same

You sometimes see people say that knitting is much older, maybe thousands of years old. It’s hard to tell how old knitting is – fabric doesn’t always preserve well – but it’s safe to say that it’s not that old. We have examples of people doing things with string for thousands of years, but no examples of knitting before those 1000 CE socks. What we do have examples of is naalbinding, a method of making fabric from yarn using a needle. Naalbinding produces a less-stretchy fabric than knitting. It’s found from Scandinavia to the Middle East and also shows up in Peru.

The native Peruvian form of naalbinding is a specific technique called cross-knit looping. (This technique also shows up sometimes in pre-American Eurasia, but it’s not common.) The interesting thing about cross-knit looping is that the fabric looks almost identical to regular knitting.

Here’s a tiny cross-knit-looped bag I made, next to a tiny regularly knit bag I made. You can see they look really similar. The fabric isn’t truly identical if you look closely (although it’s close enough to have fooled historians). It doesn’t act the same either – naalbound fabric is less stretchy than knit fabric, and it doesn’t unravel.

The ancient Peruvians cross-knit-looped decorations for other garments and the occasional hat, not socks.

Cross-knit-looped detail from the absolutely stunning Paracas Textile. If you look closely, it looks like stockinette knit fabric, but it’s not.

Inspired by the Paracas Textile figures above, I used cross-knit-looping to make this little fox lady fingerpuppet:

I think it was easier to do fine details than it would be if I were knitting – it felt more like embroidery – but it might have been slower to make the plain fabric parts than knitting would have been. But I’ve done a lot of knitting and very little cross-knit-looping, so it’s hard to compare directly. If you want to learn how to do cross-knit looping yourself, Donna Kallner on Youtube has handy instructional videos.

I wondered about naalbinding in general – does the practice predate human dispersal to the Americas, or did the Eurasian technique and the American technique evolve separately? Well, I don’t know for certain. Sewing needles and working with yarn are old old practices, definitely pre-dating the hike across Beringia (~18,000 BCE). The oldest naalbinding is 6500 years old, so it’s possible – but as far as I know, no ancient naalbinding has ever been found anywhere in the Americas outside of Peru, or in eastern Russia or Asia – it was mostly the Middle East and Europe, and then, also, separately, Peru. The process of cross-knit looping shares some similarities with net-making and basket-weaving, so it doesn’t seem so odd to me that the process was invented again in Peru.

For a while, I thought, it’s even weirder that the Peruvians didn’t get to knitting – they were so close, they made something that looks so similar. But cross-knit-looping doesn’t actually share any more similarities with knitting than naalbinding does, or than even more common crafts like basketweaving or weaving do – the tools are different, the process is different, etc.

So the question should be the same for the Romans or any other culture with yarn and sticks before 1000 AD: why didn’t they invent knitting? They had all the pieces. …Didn’t they?

Yeah, I think they did.

Part 4: Many stones can form an arch, singly none

Let’s jump topics for a second. In Egypt, a millennium before there were knit socks, there was the Library of Alexandria. Zenodotus, the first known head librarian at the Library of Alexandria, organized lists of words and probably the library’s books by alphabetical order. He’s the first person we know of to alphabetize books with this method, somewhere around 300 BCE.

Then, it takes 500 years before we see alphabetization of books by the second letter.3

The first time I heard this, I thought: Holy mackerel. That’s a long time. I know people who are very smart, but I’m not sure I know anyone smart enough to invent categorizing things by the second letter.

But. Is that true? Let’s do some Fermi estimates. The world population was 1.66E8 (166 million) in 500 BCE and 2.02E8 (202 million) in 200 CE. But only a tiny fraction would have had access to books, and only a fraction of those in the western alphabet system. (And of course, people outside of the Library of Alexandria with access to books could have done it and we just wouldn’t know, because that fact would have been lost – but people have actually studied the history of alphabetization and do seem to treat this as the start of alphabetization as a cultural practice, so I’ll carry on.)

For this rough estimate, I’ll average the world population over that period to 2E8. Assuming a 50 year lifespan, that’s 10 lifespans and thus 2E9 people living in the window. If only one in a thousand of those people would have been in a place to have the idea and have it recognized (e.g. access to lots of books), that’s 2E6 people, so whoever invented it only needed to be about 1 in 2 million. That’s suddenly not unreachable. Especially since I think “1 in 1,000 ‘being able to have the idea’” might be too high – and if it’s more like “1 in 10,000” or lower, the end number could be more like 1 in 200,000. I might actually know people who are 1 in 200,000 smart – I have smart friends. So there’s some chance I know someone smart enough to have invented “organizing by the second letter of the alphabet”.
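If you want to fiddle with those guesses yourself, the arithmetic fits in a few lines. (The figures below are the same rough ones as above – assumptions, not data.)

```python
# Rough Fermi estimate: how rare did the inventor of sorting-by-the-second-letter
# need to be? All inputs are rough guesses, not real data.

avg_population = 2e8   # rough average world population over the window
window_years = 500     # years between first-letter and second-letter alphabetization
lifespan = 50          # assumed average lifespan

people_in_window = avg_population * (window_years / lifespan)  # ~2E9 people

for access_fraction in (1 / 1_000, 1 / 10_000):
    positioned = people_in_window * access_fraction
    # Exactly one of these people had the idea, so the inventor only needed
    # to be roughly "1 in positioned" among people in a position to have it.
    print(f"access fraction {access_fraction:.2%}: inventor is ~1 in {positioned:,.0f}")
```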

Sidenote: Ancient bacteria couldn’t knit

A parallel in biology: Some organisms emit alcohol as a waste product. For thousands of years, humans have been concentrating alcohol in one place to kill bacteria. (… Okay, not just to kill bacteria.) Between 2005 and 2015, some bacteria became about 10x more resistant to alcohol.

Isn’t it strange that this only happened in the last 10 years? This question actually led, via a winding path, to the idea that became my Funnel of Human Experience blog post. I forgot to answer the question there, but suffice it to say that if alcohol production is in some way correlated with the human population, the last 10 years count for more than they sound like – but still not very much.

And yet, alcohol resistance seems to have evolved in Enterococcus faecium only recently. The authors postulate the spread of alcohol handwashing. Seems as plausible as anything. Or maybe it’s just difficult to evolve.

Knitting continues to interest me, because a lot of examples of innovation do rely heavily on what came before. To have invented organizing books by the second letter of the alphabet, you have to have invented organizing books by the first letter of the alphabet, and also know how to write, and have access to a lot of books for the second letter to even matter.

The sewing machine was invented in 1790 CE and improved drastically over the next 60 years, during which it became widely used to automate a time-consuming and extremely common task. We could ask: “But why wasn’t the sewing machine invented earlier, like in 1500 CE?”

But we mostly don’t, because to invent a sewing machine, you also need very finely machined gears and other metal parts, and that technology also came up around the industrial revolution. You just couldn’t have made a reliable sewing machine in 1500 CE, even if you had the idea – you didn’t have all the steps. In software terms, as a technology, sewing machines have dependencies. Thus, the march of human progress, yada yada yada.

But as far as I can tell, you had everything that went into knitting for thousands of years beforehand. You had sticks, you had yarn, you had the motivation. Knitting doesn’t have dependencies after that. And you had brainpower: people in the past everywhere were making fiber into yarn and yarn into clothing all of the time – seriously, making clothes from scratch takes so much time.

And yet, knitting is very recent. That was so big of a leap that it took thousands of years for someone to figure it out.


UPDATE: see the follow-up to this post with more findings from the earliest days of knitting, crochet, sprang, etc.


1 I’m not displaying the meme itself in this otherwise image-happy post because if I do, one of my friends will read this essay and get to the meme but stop reading before they get to the part where I say the meme is incorrect. And then the next time we talk, they’ll tell me that they read my blog post and liked that part where a Youtuber proved that this mysterious Roman artifact was used to knit gloves, and hah, those silly historians! And then I will immediately get a headache.

2 Flexible circular knitting needles for knitting tubes are, as you might guess, also a more modern invention. If you’re in the Medieval period, it’s four sticks or bust.

3 My girlfriend and I made a valiant attempt to verify this, including squinting at some scans of fragments from Ancient Greek dictionaries written on papyrus from Papyri.info – which is, by the way, easily one of the most websites of all time. We didn’t make much headway.

The dictionaries or bibliographies we found on papyrus seem to be ordered completely alphabetically, but even those “source texts” were copies from ~1500 CE or that kind of thing, of much older (~200 CE) texts. So those texts we found might have been alphabetized by the copiers. Also, neither of us know Ancient Greek, which did not help matters.

Ultimately, this citation about both primary and secondary alphabetization seems to come from Lloyd W. Daly’s well-regarded 1967 book Contributions to a history of alphabetization in Antiquity and the Middle Ages, which I have not read. If you try digging further, good luck and let me know what you find.

[Crossposted to LessWrong.]

The funnel of human experience

[EDIT: Previous version of this post had some errors. Thanks to jeff8765 for pinpointing the error and to esrogs in the comments for bringing it to my attention as well. This has been fixed. Also, I wrote FHI when I meant FLI.]

The graph of the human population over time is also a map of human experience. Think of the height at each year as the amount of human lived experience that happened that year. On the left, we see the approximate dawn of the modern human species in 50,000 BC. On the right, the population exploding in the present day.


It turns out that if you add up all these years, 50% of human experience has happened after 1309 AD. 15% of all experience has been experienced by people who are alive right now.

I call this “the funnel of human experience” – the fact that because of a tiny initial population blossoming out into a huge modern population, more of human experience has happened recently than time would suggest.

50,000 years is a long time, but 8,000,000,000 people is a lot of people.
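If you want to reproduce numbers like these, the whole computation is just a running sum over a population-per-year table. Here’s a minimal sketch – the tiny table in it is made up for illustration, so swap in the real population dataset linked at the bottom of this post:

```python
# Sketch: find the year by which half of all human experience had happened.
# Each interval between data points contributes (population at the start of
# the interval) x (length of the interval) person-years of lived experience.
# The table below is made up for illustration - use a real dataset.

population_by_year = [
    (-50000, 1e6),
    (-10000, 4e6),
    (0,      2e8),
    (1000,   3e8),
    (1500,   5e8),
    (1900,   1.6e9),
    (2018,   7.6e9),
]

intervals = []
for (y0, p0), (y1, _) in zip(population_by_year, population_by_year[1:]):
    intervals.append((y0, y1, p0 * (y1 - y0)))  # person-years in this interval

total = sum(person_years for _, _, person_years in intervals)

# Walk forward until we've accumulated half of the total experience.
accumulated = 0
for y0, y1, person_years in intervals:
    if accumulated + person_years >= total / 2:
        fraction = (total / 2 - accumulated) / person_years
        midpoint = y0 + fraction * (y1 - y0)
        print(f"Half of all human experience happened after ~{midpoint:.0f}")
        break
    accumulated += person_years
```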


Early human experience: casts of the skulls of the earliest modern humans found on various continents. Display at the Smithsonian National Museum of Natural History.

 


If you want to expand on this, you can start doing some Fermi estimates. We as a species have spent…

  • 1,650,000,000,000 total “human experience years”
    • See my dataset linked at the bottom of this post.
  • 7,450,000,000 human years spent having sex
    • Humans spend 0.45% of our lives having sex. 0.45% * [total human experience years] = 7E9 years
  • 1,400,000,000 years spent drinking coffee
    • 500 billion cups of coffee drunk this year x 15 minutes to drink each cup x 100 years* = 1.4E9 years
      • *Coffee consumption has likely been much higher recently than historically, but it does have a long history. I’m estimating about a hundred years of current consumption for total global consumption ever.
  • 1,000,000,000 years spent in labor
    • 110,000,000,000 humans ever x ½ women x 12 pregnancies* x 15 hours apiece = 1.1E9 years
      • *Infant mortality, yo. H/t Ellie and Shaw for this estimate.
  • 417,000,000 years spent worshipping the Greek gods
    • 1000 years* x 10,000,000 people** x 365 days a year x 1 hour a day*** = 4E8 years

      • *Some googling suggested that people worshipped the Greek/Roman Gods in some capacity from roughly 500 BC to 500 AD.
      • **There were about 10 million people in Ancient Greece. This probably tapered a lot to the beginning and end of that period, but on the other hand worship must have been more widespread than just Greece, and there have been pagans and Hellenists worshiping since then.
      • ***Worshiping generally took about an hour a day on average, figuring in priests and festivals? Sure.
  • 30,000,000 years spent watching Netflix
    • 140,000,000 hours/day* x 365 days x 5 years** = 2.92E7 years
      • * Netflix users watched an average of 140 million hours of content a day in 2017.
      • **Netflix streaming has been around for about 10 years, but has gotten much bigger recently.
  • 50,000 years spent drinking coffee in Waffle House

So humanity in aggregate has spent about ten times as long worshiping the Greek gods as we’ve spent watching Netflix.

We’ve spent another ten times as long having sex as we’ve spent worshipping the Greek gods.

And drinking coffee falls in between: we’ve spent about three times as long drinking coffee as we’ve spent worshipping the Greek gods, though still several times less than we’ve spent having sex.
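If you want to check or tweak any of these, every line item above is the same one-line conversion – a rate of doing something, times a duration, divided into years. Here’s a sketch with a few of the rough figures from the list:

```python
# Converting "how much of an activity happened" into total human-years of experience.
# All inputs are the same rough guesses as in the list above.

HOURS_PER_YEAR = 24 * 365

# Worshipping the Greek gods: ~10 million people x ~1 hour/day x 365 days x ~1000 years.
greek_gods = 10_000_000 * 1 * 365 * 1000 / HOURS_PER_YEAR   # ~4.2e8 human-years

# Netflix: ~140 million hours watched per day, over ~5 years.
netflix = 140_000_000 * 365 * 5 / HOURS_PER_YEAR            # ~2.9e7 human-years

# Coffee: ~500 billion cups a year x 15 minutes per cup x ~100 years at that rate.
coffee = 500e9 * (15 / 60) * 100 / HOURS_PER_YEAR           # ~1.4e9 human-years

for name, years in [("Greek gods", greek_gods), ("Netflix", netflix), ("coffee", coffee)]:
    print(f"{name}: {years:.1e} human-years")
```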


I’m not sure what this implies. Here are a few things I gathered from this:

1) I used to be annoyed at my high school world history classes for spending so much time on medieval history and after, when there was, you know, all of history before that too. Obviously there are other reasons for this – Eurocentrism, the fact that more recent events have clearer ramifications today – but to some degree this is in fact accurately reflecting how much history there is.

On the other hand, I spent a bunch of time in school learning about the Greek Gods, a tiny chunk of time learning about labor, and virtually no time learning about coffee. This is another disappointing trend in the way history is approached and taught, focusing on a series of major events rather than the day-to-day life of people.

2) The Funnel gets more stark the closer you move to the present day. Look at science. FLI reports that 90% of PhDs that have ever lived are alive right now. That means most scientific thought is happening in parallel rather than sequentially.

3) You can’t use the Funnel to reason about everything. For instance, you can’t use it to reason about extended evolutionary processes. Evolution is necessarily cumulative. It works on the unit of generations, not individuals. (You can make some inferences about evolution – for instance, the likelihood of any particular mutation occurring increases when there are more individuals to mutate – but evolution still has the same number of generations to work with, no matter how large each generation is.)

4) This made me think about the phrase “living memory”. The world’s oldest living person is Kane Tanaka, who was born in 1903. 28% of the entirety of human experience has happened since her birth. As mentioned above, 15% has been directly experienced by living people. We have writing and communication and memory, so we have a flawed channel by which to inherit information, and experiences in a sense. But humans as a species can only directly remember as far back as 1903.


Here’s my dataset. The population data comes from the Population Reference Bureau and their report on how many humans ever lived, and from Our World In Data. Let me know if you get anything from this.

Fun fact: The average living human is 30.4 years old.

Wait But Why’s explanation of the real revolution of artificial intelligence is relevant and worth reading. See also Luke Muehlhauser’s conclusions on the Industrial Revolution: Part One and Part Two.


Crossposted to LessWrong.

Why was smallpox so deadly in the Americas?

In Eurasia, smallpox was undoubtedly a killer. It came and went in waves for ages, changing the course of empires and countries. 30% of those infected with the disease died from it. This is astonishingly high mortality from a disease – worse than botulism, Lassa fever, tularemia, the Spanish flu, Legionnaires’ disease, and SARS.

In the Americas, smallpox was a rampaging monster.

When it first appeared on Hispaniola in 1518, it spread 150 miles in four months and killed 30-50% of people. Not just of those infected, of the entire population1. It’s said to have infected a quarter of the population of the Aztec Empire within two weeks, killing half of those2, and setting the stage for another disease to kill many more3.

Then, alongside other diseases and warfare, it contributed to 84% of the Incan Empire dying4.

Among the people who sometimes traded at the Hudson’s Bay Company’s Cumberland House on the Saskatchewan River in 1781 and 1782, 95% seemed to have died. Of them, the U’Basquiau (also called, I believe, the Basquia Cree people) were entirely killed5.

Over time, smallpox killed 90% of the Mandan tribe, along with 80% of people in the Columbia River region, 67% of the Omahas, and half of the Piegan tribe and of the Huron and Iroquois Confederations6.

Here are some estimates of the death rates between ~1605 and 1650 in various Northeastern American groups. This was during a time of severe smallpox epidemics. Particularly astonishing figures are highlighted (highlighting mine).


Figure adapted from European contact and Indian depopulation in the Northeast: The timing of the first epidemics9.

Most of our truly deadly diseases don’t move quickly or aren’t contagious. Rabies, prion diseases, and primary amoebic meningoencephalitis have more or less 100% fatality rates. So do trypanosomiasis (African sleeping sickness) and HIV, when untreated.

When we look at the impact of smallpox in the Americas, we see death rates worse than the worst forms of Ebola, moving extremely fast.

What happened?

In short, probably a total lack of previous exposure to smallpox and the other pathogenic European diseases, combined with cultural responses that helped the pathogen spread. The fact that smallpox was intentionally spread by Europeans in some cases probably contributed, but I’m not sure how much.

Virgin soil

Smallpox and its relatives in the orthopox family – monkeypox, cowpox, horsepox, and alastrim (smallpox’s milder variant) – had been established in Eurasia and Africa for centuries. Exposure to one would give some immune protection to the others. Variolation, a cruder version of vaccination, was also sometimes practiced.

Between these, and the frequent waves of outbreaks, a European adult would have survived some kind of direct exposure to smallpox-like antigens in the past, and would have the protection of antibodies to it, preventing future sickness. They would also have had, as children, the indirect protection of maternal antibodies1.

In the Americas, everyone was exposed to the most virulent form of the disease with no defenses. This is called a “virgin soil epidemic”.

In this case, epidemics would stampede through occasionally – ferociously, but infrequently enough for any given tribe that protective antibodies never built up across the population, and maternal protection didn’t develop. Many groups were devastated repeatedly by smallpox outbreaks over decades, as well as other European diseases: the Cocolizti epidemics3, measles, influenza, typhoid fever, and others7.

In virgin soil epidemics, including these ones, disease strikes all ages: children and babies, the elderly and strong young adults6. This sort of indiscriminate attack on all age groups is a known sign in animal populations that a disease is extremely lethal8. In humans, it also slows the gears of society to a halt.

When so much of the population of a village was too sick to move, not only was there nobody to tend crops or hunt – setting the stage for scarcity and starvation – but there was nobody to fetch water. Dehydration is suspected as a major cause of death, especially in children1,6. Very sick mothers would also be unable to nurse infants6.

Other factors that probably contributed:

Cultural factors

Native Americans had some concept of disease transmission – some people would run away when smallpox arrived in their village, possibly carrying and spreading the germ7. They also would steer clear of other tribes that had it. That said, many people lived in communal or large family dwellings, and didn’t quarantine the sick to private areas. They continued to sleep alongside and spend time with contagious people6.

In addition, pre-colonization Native American measures against diseases were probably somewhat effective against pre-colonization diseases, but tended to be ineffective or harmful for European diseases. Sweat baths, for instance, could have spread the disease and wouldn’t have helped9. Transmission could also have occurred during funerals10.

Looking at combinations of the above factors, death rates of 70% and up are not entirely surprising.

Use as a bioweapon

Colonizers repeatedly used smallpox as an early form of biowarfare against Native Americans, knowing that they were more susceptible. This included, at times, intentionally withholding vaccines from them. Smallpox also spreads rapidly on its own, so I’m not sure how much this contributed to the overall extreme death toll, although it certainly resulted in tremendous loss of life.

Probably not responsible:

Genetics. A lack of immunological diversity, or some other genetic susceptibility, has been cited as a possible reason for the extreme mortality rate. This might be particularly expected in South America, because of the serial founder effect – in which a small number of people move away from their home community and start their own, repeated over and over again, all the way across Beringia and down North America, into South America9.

That said, this theory is considered unlikely today1. For one, the immune systems of native peoples of the Americas react to vaccines similarly to the immune systems of Europeans10. For another, groups in the Americas also had unusually high mortality from other European diseases (influenza, measles, etc.), but this mortality decreased relatively quickly after first exposure – too quickly for genetic change to explain the improvement10.

Some have also proposed general malnutrition, which would weaken the immune system and make it harder to fight off smallpox. This doesn’t seem to have been a factor1. Scarce food was a fact of life in many Native American groups, but then again, the same was true for European peasants, who still didn’t suffer as much from smallpox.

Africa

Smallpox has had a long history in parts of Africa – the earliest known instance of smallpox infection comes from Egyptian mummies2, and frequent European contact throughout the centuries spread the disease to the parts of the continent they interacted with. Various groups in North, East, and West Africa developed their own variolation techniques11.

However, when the disease was introduced to areas it hadn’t existed before, we saw similarly astounding death rates as in the Americas: one source describes mortality rates of 80% among the Griqua people of South Africa. Less quantitatively, it describes how several Hottentot tribes were “wiped out” by the disease, that some tribes in northern Kenya were “almost exterminated”, and that parts of the eastern Congo River basin became “completely depopulated”2.

This makes it sound like smallpox acted similarly in unexposed people in Africa. It also lends another piece of evidence against the genetic predisposition hypothesis – that the disease would act similarly on groups so geographically removed.

Wikipedia also tells me that smallpox was comparably deadly when it was first introduced to various Australasian islands, but I haven’t looked into this further.

Extra

Required reading on humanism, smallpox, and smallpox eradication.


When smallpox arrived in India around 400 AD, it spurred the creation of Shitala, the Hindu goddess of (both causing and curing) smallpox. She is normally depicted on a donkey, carrying a broom for either spreading germs or sweeping out a house, and a bowl of either smallpox germs or of cool water.

The last set of images on this page also seems to be a depiction of the goddess, and captures something altogether different, something more dark and visceral.


Finally, this blog has a Patreon. If you like what you’ve read, consider giving it your support so I can make more of it.

References


  1. Riley, J. C. (2010). Smallpox and American Indians revisited. Journal of the History of Medicine and Allied Sciences, 65(4), 445-477.
  2. Fenner, F., Henderson, D. A., Arita, I., Jezek, Z., Ladnyi, I. D., & World Health Organization. (1988). Smallpox and its eradication.
  3. Acuna-Soto, R., Stahle, D. W., Cleaveland, M. K., & Therrell, M. D. (2002). Megadrought and megadeath in 16th century Mexico. Revista Biomédica, 13, 289-292.
  4. Beer, M., & Eisenstat, R. A. (2000). The silent killers of strategy implementation and learning. Sloan Management Review, 41(4), 29.
  5. Houston, C. S., & Houston, S. (2000). The first smallpox epidemic on the Canadian Plains: in the fur-traders’ words. Canadian Journal of Infectious Diseases and Medical Microbiology, 11(2), 112-115.
  6. Crosby, A. W. (1976). Virgin soil epidemics as a factor in the aboriginal depopulation in America. The William and Mary Quarterly: A Magazine of Early American History, 289-299.
  7. Sundstrom, L. (1997). Smallpox Used Them Up: References to Epidemic Disease in Northern Plains Winter Counts, 1714-1920. Ethnohistory, 305-343.
  8. MacPhee, R. D., & Greenwood, A. D. (2013). Infectious disease, endangerment, and extinction. International Journal of Evolutionary Biology, 2013.
  9. Snow, D. R., & Lanphear, K. M. (1988). European contact and Indian depopulation in the Northeast: the timing of the first epidemics. Ethnohistory, 15-33.
  10. Walker, R. S., Sattenspiel, L., & Hill, K. R. (2015). Mortality from contact-related epidemics among indigenous populations in Greater Amazonia. Scientific Reports, 5, 14032.
  11. Herbert, E. W. (1975). Smallpox inoculation in Africa. The Journal of African History, 16(4), 539-559.