I’ve been reading a lot of trip reports lately. Trip reports are accounts people write about their experiences doing drugs, for the benefit of other people who might do those same drugs. I don’t take illegal drugs myself, but I like learning about other people’s intense experiences, and trip reports are little peeks into the extremes of human consciousness.
In some of these, people are really trying to communicate the power and revelation they had on a trip. They’re trying to share what might be the most meaningful experience of their entire life.
Here’s another thing: almost all trip reports are kind of mediocre writing.
This is wildly judgmental but I stand by it. Here are some common things you see in them:
Focusing on details specific to the situation that don’t matter to the reader. (Lengthy accounting of logistics, who the person was with at what time even when they’re not mentioned again, etc.)
Sort of basic descriptions of phenomena and emotions: “I was very scared.” “I couldn’t stop thinking about it.”
Cliches: “I was glad to be alive.” “It felt like I was in hell.” “It was an epic struggle.”
Insights described in sort of classically-high-sounding abstractions. “I realized that the universe is made of love.” “Everything was nothing and time didn’t exist.” These statements are not explained, even if they clearly still mean a lot to the writer, and do not really communicate the force of whatever was going on there.
It’s not, like, a crime to write a mediocre trip report. It’s not necessarily even a problem. They’re not necessarily trying to convince you of anything. A lot of them are just what it says on the tin: recording some stuff that happened. I can’t criticize these for being bland, because that seems like trying to critique a cookbook for being insufficiently whimsical: they’re just sharing information.
(…Though you can still take that as a personal challenge; “is this the best prose it can be?” For instance, How to Cook and Eat in Chinese by Chao Yang Buwei is a really well-written cookbook with a whimsical-yet-practical style. There’s always room to grow.)
But some of these trip reports very much do have an agenda, like “communicating crucial insights received from machine elves” or “convincing you not to take drug X because it will ruin your life”. In these cases, the goal would be better served if the writing were good, and boy howdy, my friends: the writing is not good.
Which is a little counter-intuitive, right? You’d think these intense and mind-blowing experiences would automatically give you rich psychic grist for sharing with others, but it turns out, no, accounts of the sublime and life-altering can still be astonishingly mid.
Now certain readers may be thinking, not unreasonably, “that’s because drug-induced revelations aren’t real revelations. The drug’s effects make some thoughts feel important – a trip report can’t explain why a particular ‘realization’ is important, because there’s nothing behind it.”
But you know who has something new and important to say AND knows why it’s important? Academic researchers publishing their latest work.
But alas, academic writing is also, too frequently, not good.
And if good ideas made for good writing, you’d expect scientific literature to be the prime case for it. Academic scientists are experts: they know why they made all the decisions they did, they know what the steps do, they know why their findings are important. But that’s also not enough.
Ignore academic publishing and the scientific process itself; let’s just look at the writing. It’s very dense, denser than it needs to be. It does not start with simple ideas and build up; it’s practically designed to tax the reader. It’s just boring; it’s not pleasant to read. The rationale behind specific methods or statistical tests isn’t explained. (See The Journal of Actually Well-Written Science by Etienne Fortier-Dubois for more critique of the standard scientific style.) There’s a whole career field dedicated to explaining academic studies to laypeople, which is also, famously, often misleading and bad.
This is true for a few reasons:
First, there’s a floor of how “approachable” or “easy” you can make technical topics. A lot of jargon serves useful purposes, and what’s the point in a field of expertise if you can’t assume your reader is caught up on at least the basics? A description of synthesizing alkylated estradiol derivatives, or a study on the genome replication method of a particular virus, is simply very difficult to make layperson-accessible.
Second, academic publishing and the scientific edifice as it currently stands encourage uniformity of many aspects of research output, including style and structure. Some places like Seeds of Science are pushing back on this, but they’re in the minority.
But third, and this is what trips up the trip-reporters and the scientists alike, writing well is hard. Explaining complicated or abstract or powerful ideas is really difficult. Just having the insight isn’t enough – you have to communicate it well, and that is its own, separate skill.
I don’t really believe in esoterica or the innately unexplainable. “One day,” wrote Jack Kerouac, “I will find the right words, and they will be simple.” Better communication is possible. There are great descriptions of being zonked out of one’s gourd and there is great, informative, readable science writing.
So here’s my suggestion: Learn to write well before you have something you really need to tell people about. Practice it on its own. Write early and often. Write a variety of different things and borrow techniques from writing you like. And once you have a message you actually need to share, you’ll be able to express it.
(A more thorough discussion of how to actually write well is beyond the scope of this blog post – my point here is just that it’s worth improving. If you’re interested, let me know and I might do a follow-up.)
Thank you Kelardry for reviewing a draft of this post.
Today’s post isn’t so much an essay as a recommendation for two bodies of work on the same topic: Tom Mahood’s blog posts and Adam “KarmaFrog1” Marsland’s videos on the 2010 disappearance of Bill Ewasko, who went for a day hike in Joshua Tree National Park and dropped out of contact.
(I won’t be fully recounting every aspect of the story. But I’ll give you the pitch and go into some aspects I found interesting. Literally everything interesting here is just recounting their work, go check em out.)
Most ways people die in the wilderness are tragic, accidental, and kind of similar. A person in a remote area gets injured or lost, becomes the other one too, and dies of exposure, a clumsy accident, etc. Most people who die in the wilderness have done something stupid to wind up there. Fewer people die who have NOT done anything glaringly stupid, but it still happens, in the same ways. Ewasko’s case appears to have been one of these. He was a fit 66-year-old who went for a day hike and never made it back. His story is not particularly unusual.
This is also not a triumphant story. Bill Ewasko is dead. Most of these searches were made and reports written months and years after his disappearance. We now know he was alive when Search and Rescue started, but by months out, nobody involved expected to find him alive.
Ewasko was not found alive. In 2022, other hikers finally stumbled onto his remains in a remote area in Joshua Tree National Park; this was, largely, expected to happen eventually.
I recommend these particular stories, even though we already know the ending, because they’re stunningly in-depth, well-written, fact-driven investigations from two smart technical experts trying to get to the bottom of a very difficult problem. Because of the way things shook out, we get to see the investigation, and the changes in theories, at multiple points: Tom Mahood spent years trying to locate Ewasko, writing reports after search after search, finding and receiving new evidence, changing his mind – as did Adam – and then we get the main missing piece: the finding of the body. Adam visits the site and tries to put the pieces together after that.
Mahood and Adam are trying to do something very difficult in a very level-headed fashion. It is tragic but also a case study in inquiry and approaching a question rationally.
(They’re not, like, Rationalist rationalists. One of Mahood’s logs makes note of visiting a couple of coordinates suggested by remote viewers, AKA psychics. But the human mind is vast and full of nuance, and so was the search area, and on literally every other count, I’d love to see you do better.)
Unknowns and the missing persons case
Like I said, nothing mind-boggling happened to Ewasko. But to be clear, by wilderness Search and Rescue standards, Ewasko’s case is interesting for a couple reasons:
First, Ewasko was not expected to be found very far away. He was a 66-year-old on a day hike. But despite an early and persistent search effort, the body was not found for over a decade.
Second, two days after he failed to make a home-safe call to his partner and was reported missing, a cell tower reported one ping from his cell phone. It wasn’t enough to triangulate his location, but the ping suggested that the phone was on, somewhere around a radius of approximately 10.6 miles from a specific cell tower. The nearest point of that circle, however, was miles in the opposite direction from the likely trail destinations near Ewasko’s car – miles from where Ewasko ought to have been.
If you’ve spent much time in wilderness areas in the US, you know that cell coverage is findable but spotty. You’ll often get reception on hills but not in valleys, or suchlike. There’s a margin for error on cell tower pings that depends on location. Also, in this case, Verizon (Ewasko’s carrier) had decent coverage in the area – so it’s kind of surprising, and possibly constrains his route, that his cell phone would have pinged only once.
All of this is very Bayesian: Ewasko’s cellphone was probably turned off for parts of his movement to save battery (especially before he realized he was in danger), maybe there was data that the cell carrier missed, etc, etc. But maybe it suggests certain directions of travel over others. And of course, to have that one signal that did go out, he has to have gotten to somewhere within that radius – again, probably.
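Nobody involved was literally running this computation, but if you want to see the shape of that reasoning, here’s a minimal sketch in Python. Every number in it is invented for illustration – the car’s position, the prior’s decay rate, and the ping slop are my placeholder assumptions, not case data:

```python
# Toy Bayesian grid: combine "he started near his car" with
# "the phone pinged near a ~10.6-mile ring around the tower".
import numpy as np

cell = 0.25                          # grid cell size, in miles
xs = np.arange(-15, 15, cell)        # miles east of the tower
ys = np.arange(-15, 15, cell)        # miles north of the tower
X, Y = np.meshgrid(xs, ys)

car = (-12.0, 2.0)                   # hypothetical car location (invented)
dist_from_car = np.hypot(X - car[0], Y - car[1])

# Prior: a lost day-hiker is more likely to be near where he started.
prior = np.exp(-dist_from_car / 4.0)

# Likelihood: one ping puts the phone near the 10.6-mile ring,
# with generous slop for terrain and measurement error.
dist_from_tower = np.hypot(X, Y)
sigma = 1.5                          # miles of slop (a pure guess)
likelihood = np.exp(-0.5 * ((dist_from_tower - 10.6) / sigma) ** 2)

posterior = prior * likelihood
posterior /= posterior.sum()         # normalize into a probability map

iy, ix = np.unravel_index(np.argmax(posterior), posterior.shape)
print(f"highest-probability cell: {xs[ix]:+.1f} mi E, {ys[iy]:+.1f} mi N of tower")
```

The cells worth searching first are the ones where “near the car” and “near the ring” overlap – which is exactly the tension in Ewasko’s case, since those two regions barely overlapped at all.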
How do you look for someone in the wilderness?
Search and rescue – especially if you are looking for something that is no longer actively trying to be found, like a corpse – is very, very arduous. In some ways, Joshua Tree National Park is a pretty convenient location to do search and rescue: there aren’t a lot of trees, the terrain is not insanely steep, you don’t have to deal with river or stream crossings, clues will not be swept away by rain or snow.
I haven’t been to Joshua Tree myself, but going from Adam’s videos, this is representative of the kind of terrain. || Photo in Joshua Tree National Park by Shane Burkhardt, under a CC BY-NC 2.0 license.
There are rocks, low obstacles, different kinds of terrain, hills and lines of sight, and enough shrubbery to hide a body.
A lot of the terrain looks very similar to other parts of the terrain. Also dotted about are washes made of long stretches of smooth sand, so the landscape is littered with features that look exactly like trails.
Also, environmentally, it’s hot and dry as hell, like “landscape will passively kill you”, and there are rattlesnakes and mountain lions.
When a search and rescue effort begins, searchers start by outlining the area in which they think the person might plausibly be. Natural features like cliffs can constrain that area, as can things like roads, on the grounds that if a lost person found a road, they’d wait by the road.
You also consider how long it’s been and how much water they have. Bill Ewasko was thought to have three bottles of water on him – under harsh and dry circumstances, that water becomes a leash: you can only go so far with what you have. A person on foot in the desert is limited in both time and distance by the amount of water they carry; once that water runs out, their body will drop somewhere in the area those parameters circumscribe.
Starting from the closest, most likely places and moving out, searchers first hit up the trails and other clear points of interest. But once they leave the trail? Well, when they can, maybe they go out in an area-covering pattern, like this:
Map by Tom Mahood of one of his search expeditions, posted here. The single-dashed line is the cellphone ping radius.
But in practice, that’s not always tenable. Maybe you can plainly see one area from another and visually verify there’s nothing there. Maybe a neat pattern wouldn’t get you enough coverage, if there are obstacles in the way. There are mountains and cliff faces and rocky slopes to contend with.
Also, it’s pretty hard to cover “all the trails”, since they connect to each other, and someone is really more likely to be near a trail than far away from one. Or you might have an idea about how they would have traveled – so do you do more terrain-covering searching, or do you check farther-out trails? In this process, searchers end up making a lot of judgment calls about what to prioritize, way more than you might expect.
You end up taking snaky routes like this:
Map by Tom Mahood, posted here. This is a zoom-in of a pretty small area. Blue was the ground covered in this single expedition, green and red are older search trails, and the long dashed line is the cellphone ping radius.
The initial, official Search and Rescue was called off after about a week, so the efforts Mahood records – most of which he is doing himself, or with some buddies – constitute basically every search that happened. He posts GPS maps too, of that day’s travels overlaid on past travels. You see him work outward, covering hundreds of miles, filling in the blank spots on the map.
Mahood is really good at both being methodical and explaining his reasoning for each expedition he makes, and where he thinks to look. It’s an absolutely fascinating read.
The purple dot is my addition. This is where Ewasko’s body was found in 2022. Mahood wrote this about the same trip where (as far as I can tell) he came the closest any searcher ever got to finding Ewasko. Despite saying it was the end game, Mahood and associates mounted about 50 more trips. Hindsight is heartbreaking.
Making hindsight useful
Hindsight haunts this story in 2024. It’s hard to learn about something like this and not ask “what could have stopped this from happening?”
I found myself thinking, sort of automatically, “no, Ewasko, turn around here, if you turn around here you can still salvage this,” like I was planning some kind of cross-temporal divine intervention. That line of thinking is, clearly, not especially useful.
Maybe the helpful version of this question, or one of them, is: If I were Ewasko, knowing what Ewasko knew, what kind of heuristics should I have used that would have changed the outcome?
The answer is obviously limited by the fact that we don’t know what Ewasko did. There are some specifics, like that he didn’t give his contacts detailed hiking plans. But he was also planning on a day hike at an established trailhead in a national park an hour outside of Palm Springs. As for what happened once he was up the trail, you’ll have to watch Adam’s video and draw your own conclusions (if Adam is even right).
Mahood writes: “People seldom act randomly, they do what makes sense to them at the time at the specific location they are at.”
And Adam says: “Most man-made disasters don’t spring from one bad decision but from a series of small, understandable mistakes that build on one another.”
Another question is: If I were the searchers, knowing what the searchers know, what could I have done differently that would have found the body faster?
Knowing how far away the body was found and the kind of terrain covered, the jury’s still out for me on this one.
How deep the search got
Moving parts include:
Concrete details about Ewasko (Ewasko’s level of fitness, his supplies, down to the particular maps he had, what his activities were earlier in the day)
Ewasko’s broader mindset (where he wanted to go at the outset, which tools he used to navigate trails, how much HE knew about the area)
Ewasko’s moment-to-moment experience (if he were at a particular location and wanted to hurry home, which route would he take? What if he were tired and low on water and recognized he was in an emergency? What plans might he make?) (This ties into the field of Search and Rescue psychology – people disoriented in the wilderness sometimes make predictable decisions.)
Physical terrain (which trails exist, and where? How hard is it to get from place to place? What obstacles are there?)
Weather (how much moonlight was there? How hard was travelling by night? How bad was the daytime heat?)
Electromagnetic terrain (where in the park has cell service?)
Electromagnetic interpretation (How reliable is one reported cell phone ping? If it is inaccurate, in which ways might it be inaccurate?)
Other people’s reports (the very early search was delayed because a ranger apparently just repeatedly failed to notice Ewasko’s car at a trailhead, and there were conflicting reports about which way it was parked. According to Adam, and I think Mahood, it now seems like the car was probably there the entire time it should have been, and it was probably just missed due to… regular human error. But if this is one of the few pieces of evidence you have, and it looks odd – of course it seems very significant.)
The search evolving over time (where has been looked, in what ways, before? And especially as the years pass – some parts of the terrain are now extremely well-searched, not to mention regularly used by regular hikers. What are the chances one of these searches missed somewhere, vs. that Ewasko is in a completely new part of the territory?)
I imagine that it would be really hard to choose to carry on with something like this. In this investigation, there was really no new concrete evidence between 2010 and 2022. As Mahood goes on, with each expedition, he adds the tracks to his map. Territory fills in, big swathes of it, trail by trail. New models emerge, but by and large the only changing detail is that you’ve checked some places now, and he’s somewhere you haven’t checked. Probably.
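That last “probably” is doing quiet Bayesian work, and it’s worth making it concrete. Here’s a toy continuation of the grid idea from earlier – the detection probability and cell counts are invented for illustration, not drawn from the actual search:

```python
# Why "he's somewhere you haven't checked, probably" is a Bayesian
# statement: an unsuccessful search doesn't zero out a cell (searchers
# miss things); it shifts probability toward everywhere you haven't been.
import numpy as np

p = np.full(100, 1 / 100)       # flat prior over 100 map cells
searched = np.zeros(100, dtype=bool)
searched[:30] = True            # say 30 cells have been covered so far
detection = 0.7                 # chance a search spots what's there (invented)

# Bayes: P(cell | nothing found there) is proportional to
#        P(cell) * P(nothing found | cell)
p[searched] *= 1 - detection
p /= p.sum()

print(f"searched cells now hold {p[searched].sum():.0%} of the probability")  # ~11%
print(f"unsearched cells hold {p[~searched].sum():.0%}")                      # ~89%
```

Run the update again after every expedition and the map keeps tilting toward the unsearched cells – but the searched ones never quite hit zero, which is why old ground occasionally gets re-walked.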
A hostile information environment
Another detail that just makes the work more impressive: Mahood did all these investigations mostly on his own, with (as he sees it, although the phrasing is mine) dismissal and little help from Joshua Tree National Park officials. The reason Mahood posted all of this on the internet was, as he describes it, a matter of throwing up his hands and trying to crowd-source the problem, asking for ideas.
Then after that – the internet has a lot of interested, helpful people. I first ran into Mahood’s blog years ago via r/RBI (“Reddit Bureau of Investigation”) or /r/UnsolvedMysteries or one of those. I love OSINT; I think Mahood doing what he did was very cool. But on those sites, and in other places, there are also a lot of out-there wackos. (I know, wackos on the internet. Imagine.) In fact there’s a whole conspiracy theory community called Missing 411 about unexplained disappearances in national parks, which attributes them vaguely to sinister and/or supernatural sources. I think that’s all probably full of shit, though I haven’t tried to analyze it.
Anyway, this case attracted a lot of attention among those types. Like: What if Bill Ewasko didn’t want to be found? What if someone wanted to kill him? What if the cellphone ping was left as an intentional red herring? You run into words like “staged” or “enforced disappearance” or “something spooky” in this line of thought, to say nothing of run-of-the-mill suicide.
Look, we live in a world where people do sometimes get kidnapped or killed or go to remote places to kill themselves; the probability is not zero. Also – and I apologize if this sounds patronizing to searchers, I mean it sympathetically – extended fruitless efforts like this seem like they could get maddening, to the point where alternative explanations, ones where all your assumptions are wrong, start looking really promising. Like, you’re weaving this whole dubious story about how Ewasko might have gone down the one canyon without cell reception, climbing up and down hills in baking heat while out of water and injured – and there’s this other theory, waving its hands in the corner, going: yeah, OR he’s just not in the park at all, dummy!
Its apparent simplicity is seductive.
Mahood apparently never put much stock in these sorts of alternate models of the situation; Adam thought one was seriously likely for a while. I think it’s fair to say that “Ewasko died hiking in the park, in a regular kind of way” was always the strongest theory, but it’s the easiest fucking thing in the world for me to say that in retrospect, right? I wasn’t out there looking.
Maps and territories
Adam presents a theory about Ewasko’s final course of travel. It’s a solid and kind of stunning explanation that relies on deep familiarity with many of the aforementioned moving parts of the situation, and I do want you to watch the video, so go watch his video. (Adam says Mahood disagrees with him about some of the specifics – Mahood hasn’t written more since the body was found, but he might at some point, so keep an eye out.)
I’ll just talk a little about one aspect of the explanation: Adam suspects Ewasko initially got lost because of a discrepancy between the maps at the time and the on-the-ground trail situation. See, multiple trails run out of the trailhead Ewasko parked at and through the area he was lost in, including official park-made trails and older abandoned Jeep trails.
Example of two trails coming out of the Juniper Flats trailhead where Ewasko’s car was parked. Adam thinks Ewasko could have taken the Jeep trail and not even noticed the foot trail. | Adapted from Google satellite imagery from 2024. I made this image, but this exact point was first made by Adam in his video.
Adam believes that, partly as a result of the 1994 Desert Protection Act, Joshua Tree National Park was trying to promote the use of its own trails, as an ecosystem conservation method. He believes that Joshua Tree issued guidance to mapmakers to not mark (or to de-prioritize marking) trails like the old Jeep roads, and to prioritize marking the official trails, some of which were faint and not well-indicated with signage.
Adam thinks Ewasko left the parking lot on the Jeep road – which, to be fair, runs mostly parallel to the official trail, and rejoins it later. But he thinks that Ewasko, when returning, realized there was another parallel trail to the south and wanted to take a different route back, causing him to look for an intersection. However, Ewasko was already on the southern trail, and the unlabeled intersection he found led to another trail that took him deeper into the wilderness – beginning the terrible spiral.
Think of this in terms of Type I and Type II errors. It’s obvious why putting a non-existent trail on a map could be dangerous: you wouldn’t want someone going to a place where they think there is a trail, because they could get lost trying to find it. It’s less obvious why not marking a trail that does exist could be dangerous, but it may well have been in this case, because it can lead people to make other navigational errors.
Endings
The search efforts did not, per se, “work”. Ewasko’s body was not found because of the search effort, but by backpackers who went off-trail to get a better view of the sunset. His body was on a hill, about seven miles northeast of his car, very close to the cellphone ping radius. He was a mile from a road.
In Adam’s final video, on Ewasko’s coroner’s report, Adam explains that he doesn’t think he will ever learn anything else about Ewasko’s case. That is, he could be wrong about what he thinks happened, or someone may develop a better understanding of the facts, but there will be no new facts. Or at least, he doubts there will be. There’s just nothing left that’s likely to be found.
There are worse endings, but “we have answered some of our questions but not all of them and I think we’ve learned all we are ever going to learn” has to be one of the saddest.
Like I said, I think the searchers made an incredible, thoughtful effort. Sometimes, you have a very hard problem and you can’t solve it. And you try very hard to figure out where you’re wrong and how and what’s going on and what you do is not good enough.
These reports remind me of the wealth of material available on airplane crashes, the root cause analyses done after the fact. Mostly, when people die in maybe-stupid and sad accidents, their deaths do not get detailed investigations, they do not get incident reviews, they do not get root cause analyses.
But it’s nice that sometimes they do.
If you go out into the wilderness, bring plenty of water. Maybe bring a friend. Carry a GPS unit or even a PLB (personal locator beacon) if you might go into risky territory. Carry the 10 essentials. If you get lost, think really carefully before going even deeper into the wilderness and making yourself harder to find. And tell someone where you’re going.
You thought you were done learning about knitting history? You fool. You buffoon. I wanted to double check some things in the last post and found out that the origins of knitting are even weirder than I guessed.
Humans have been wearing clothes to hide our sinful sinful bodies from each other for maybe about 20,000 years. To make clothes, you need cloth. One way to make cloth is animal skin or membrane, that is, leather. If you want to use it in any complicated or efficient way, you also need some way to sew it – very thin strips of leather, or sinew or plant fiber spun into thread. Also popular since very early on is taking that thread and turning it into cloth. There are a few ways to do this.
By the way, I’m going to be referring to “thread” and “yarn” interchangeably from here on out. Don’t worry about it.
(Can you just sort of smush the fiber into cloth without making it into thread? Yes. This is called felting. How well it works depends on the material properties of the fiber. A lot of traditional Pacific Island cloth was felted from tree bark.)
Now, with all of these, you could probably make some kind of cloth by taking threads and, by hand, shaping them into these different structures. But that sounds exhausting and nobody did that. Let’s get tools involved. Each of these structures corresponds to a different kind of manufacturing technique.
By far, the most popular way of making cloth is weaving. Everyone has been weaving for tens of thousands of years. It’s not quite a cultural universal but it’s damn close. To weave, you need a loom.1 There are ten million kinds of loom. Most primitive looms can make a piece of cloth that is, at most, the size of the loom. So if you want to make a tunic that’s three feet wide and four feet long, you need cloth that’s at least three feet wide and four feet long, and thus, a loom that’s at least three feet wide and four feet long. You can see how weaving was often a stationary affair.
Recap
Here’s what I said in the last post: Knitting is interesting because the manufacturing process is pretty simple, needs simple tools, and is portable. The final result is also warm and stretchy, and can be made in various shapes (not just flat sheets). And yet, it was invented fairly recently in human history.
I mostly stand by what I said in the last post. But since then I’ve found some incredible resources, particularly the scholarly blogs Loopholes by Cary “stringbed” Karp and Nalbound by Anne Marie Deckerson, which have sent me down new rabbit-holes. The Egyptian knit socks I outlined in the last post sure do seem to be the first known knit garments, like, a piece of clothing that is meant to cover your body. They’re certainly the first known ones that take advantage of knitting’s unique properties: of being stretchy, of being manufacturable in arbitrary shapes. The earliest knitting is… weirder.
SCA websites
Quick sidenote – I got into knitting because, in grad school, I decided that in the interests of well-roundedness and my ocular health, I needed hobbies that didn’t involve reading research papers. (You can see how far I got with that). So I did two things: I started playing the autoharp, and I learned how to knit. Then, I was interested in the overlap between nerds and handicrafts, so a friend in the Society for Creative Anachronism pitched me on it and took me to a coronation. I was hooked. The SCA covers “the medieval period”; usually, 1000 CE through 1600 CE.
I first got into the history of knitting because I was checking if knitting counted as a medieval-period art form. I was surprised to find that the answer was “yes, but barely.” As I kept looking, a lot of the really good literature and analysis – especially experimental archaeology – came out of the blogs of people who were into it as a hobby, or perhaps as a lifestyle that had turned into a job, like historical reenactment. This included a lot of people in the SCA, who had gone into these depths before and just wrote down what they found and published it for someone else to find. It’s a really lovely knowledge tradition to find oneself a part of.
Aren’t you forgetting sprang?
There’s an ancient technique that gets some of the benefits of knitting, which I didn’t get to in the last post. It’s called sprang. Mechanically, it’s kind of like braiding. Like weaving, sprang requires a loom (the size of the cloth it produces) and makes a flat sheet. Like knitting, however, it’s stretchy.
Sprang shows up in lots of places – the oldest from 1400 BCE in Denmark, but also other places in Europe, plus (before colonization!): Egypt, the Middle East, central Asia, India, Peru, Wisconsin, and the North American Southwest. Here’s a video where re-enactor Sally Pointer makes a sprang hairnet with iron-age materials.
Despite being widespread, it was never a common way to make cloth – everyone was already weaving. The question of the hour is: Was it used to make socks?
Well, there were probably sprang leggings. Dagmar Drinkler has made historically-inspired sprang leggings, which demonstrate that sprang colorwork creates some of the intricate designs we see painted on Greek statues – like this 480 BCE Persian archer.
Sculpture as it was originally painted. Source: Metropolitan Museum of Art. || Sprang clothing fit over a model of the sculpture, showing the same designs. Source: Dagmar Drinkler, “Tight-fitting Clothes in Antiquity and the Renaissance”
I haven’t found any attestations of historical sprang socks. The Sprang Lady has made some, but they’re either tube socks or have separately knitted soles.
Why weren’t there sprang socks? Why didn’t sprang, widespread as it is, take on the niche that knitting took?
I think there are two reasons. One, remember that a sock is a shaped garment, tube-like, usually with a bend at the heel, and that, like weaving, sprang makes a flat sheet. If you want another shape, you have to sew it in, and the fabric is going to lose some stretch at the seam. It’s just more steps and skills than knitting a sock.
The second reason is warmth. I’ve never done sprang myself – from what I can tell, it has more of a net-like openness upon manufacture, unlike knitting which comes with some depth to it. Even weaving can easily be made pretty dense simply by putting the threads close together. I think, overall, a sprang fabric garment made with primitive materials is going to be less warm than a knit garment made with primitive materials.
Those are my guesses. I bring it up merely to note that there was another thread → cloth technique that made stretchy things that didn’t catch on the same way knitting did. If you’re interested in sprang, I cannot recommend The Sprang Lady’s work highly enough.
Anyway, let’s get back to knitting.
Knitting looms
The whole thing about Roman dodecahedrons being (hypothetically) used to knit glove fingers, described in the last post? I don’t think that was actually the intended purpose, for the reasons I described re: knitting wasn’t invented yet. But I will cop to the best argument in its favor, which is that you can, in fact, knit glove fingers with a Roman dodecahedron.
“But how?” say those of you not deeply familiar with various fiber arts. “That’s not needles,” you say.
You got me there. This is a variant of a knitting loom. A knitting loom is a hoop with pegs, used to make knit tubes. This can be the basis of a knitting machine, but you can also knit on one on its own. They make more consistent knit tubes with less required hand-eye coordination. (You can also make flat panels with them, especially on a version called a knitting rake, but since all of the early knitting we’re talking about is tubes anyhow, let’s ignore that for the time being.)
Knitting on a modern knitting loom. || Photo from Cynthia M. Parker on flickr, under a CC BY-SA 2.0 license.
Knitting on a loom is also called spool knitting (because you can use a spool with nails in it as the loom for knitting a cord) and tomboy knitting (…okay). Structurally, I think this is also basically the same thing as lucet cord-making, so let’s go ahead and throw that in with this family of techniques. (The earliest lucets are from ~1000 CE Viking Sweden and perhaps medieval Viking Britain.)
The important thing to note is that loom knitting makes a result that is, structurally, knit. It’s difficult to tell whether a given piece is knit with a loom or needles, if you didn’t see it being made. But since it’s a different technique, different aspects become easier or harder.
A knitting loom sounds complicated but isn’t hard to make, is the thing. Once you have nails, you can make one easily by putting them in a wood ring. You could probably carve one from wood with primitive tools. Or forge one. So we have the question: Did knitting needles or knitting looms come first?
We actually have no idea. There aren’t objects that are really clearly knitting needles OR knitting looms until long after the earliest pieces of knitting. This strikes me as a little odd, since wood and especially metal should preserve better than fabric, but it’s what we’ve got. It’s probably not helped by the fact that knitting needles are basically just smooth straight sticks, and it’s hard to say that any smooth straight stick is conclusively a knitting needle (unless you find it with half a sock still on it.)
(At least one author, Isela Phelps, speculates that finger-knitting, which uses the fingers of one hand like a knitting loom and makes a chunky knit ribbon, came first – presumably because, well, it’s easier to start from no tools than to start from a specialized tool. This is possible, although the earliest knit objects are too fine and have too many stitches to have been finger-knit. The creators must have used tools.)
(stringbed also points out that a piece of whale baleen can be used as circular knitting needles, and that the relevant cultures did have access to and trade in whale parts. While we have no particular evidence that baleen was used as such, it does mean that humanity wouldn’t have had to invent plastic before inventing the circular knitting needle – we could have had it since the prehistoric period. So, I don’t know, maybe it was whales.)
THE first knitting
The earliest knit objects we have… ugh. It’s not the Egyptian socks. It’s this.
There is a pair of long, thin, colorful knit tubes, about an inch wide and a few feet long. They’re pretty similar to each other. Due to the problems inherent in time passing and the flow of knowledge, we know one of them is probably from Egypt, and it was carbon-dated to 425-594 CE. The other, quite similar tube, of a similar age, has not been carbon-dated but is definitely from Egypt. (The original source text for this second artifact is in German, so I didn’t bother trying to find it, and instead refer to stringbed’s analysis. See also matthewpius guest-blogging on Loopholes.) So between the two of them, we have a strong guess that these knit tubes were manufactured in Egypt around 425-594 CE, about 500 years before socks.
People think it was used as a belt.
This is wild to me. Knitting is stretchy, and I did make fun of those peasants in 1300 CE for not having elastic waistlines, so I could see a knitted belt being more comfortable than other kinds of belts.2 But not a lot better. A narrow knit belt isn’t going to distribute force onto the body much differently than a regular non-stretchy belt, and regular non-stretchy belts were already in great supply – woven, rope, leather, etc. Someone invented a whole new means of cloth manufacture and used it to make a thing that already existed, slightly differently.
Then, as far as I can tell, there are no knit objects in the known historical record for five hundred years until the Egyptian socks pop up.
Pulling objects out of the past is hard. Especially things made from cloth or animal fibers, which rot (as compared to metal, pottery, rocks, bones, which last so long that in the absence of other evidence, we name ancient cultures based on them.) But every now and then, we can. We’ve found older bodies and textiles preserved in ice and bogs and swamps.3 We have evidence of weaving looms and sewing needles and pictures of people spinning or weaving cloth and descriptions of them doing it, from before and after. I’m guessing that the technology just took a very long time to diversify beyond belts.
Speaking of which: how was the belt made? As mentioned, we don’t find anything until much later that is conclusively a knitting needle or a knitting loom. The belts are also, according to matthewpius on Loopholes, made with a structure called double knitting. The effect is (as indicated by Pallia – another historic reenactor blog!) kind of hard to achieve with knitting needles in the way they achieved it, but pretty simple to do with a knitting loom.
You think this is bad? Remember before how I said knitting was a way of manufacturing cloth, but that it was also definable as a specific structure of a thread, that could be made with different methods?
The oldest knit object in Europe might be a cup.
The Ardagh Chalice. || Photo by Sailko under a CC BY-SA 3.0 license.
You gotta flip it over.
Underside of the Ardagh Chalice. || Adapted from a Metropolitan Museum image.
Enhance.
Photo from Robert M. Organ’s 1963 article “Examination of the Ardagh Chalice-A Case History”, where they let some people take the cup apart and put it back together after.
That’s right, this decoration on the bottom of the Ardagh Chalice is knit from wire. Another example is the decoration on the side of the Derrynaflan Paten, a plate made in 700 or 800 CE in Ireland. All the examples seem to be from churches, hidden by or from Vikings. Over the next few hundred years, there are some other objects in this technique. They’re tubes knitted from silver wire. “Wait, can you knit with wire?” Yes. Stringbed points out that knitting wire with needles or a knitting loom would be tough on the valuable silver wire – they could break or distort it.
The Derrynaflan Paten, zoomed in on the knit decorations at the end. || Adapted from this photo by Johnbod, under a CC BY-SA 3.0 license.
What would make sense to do it with is a little hook, like a crochet hook. But that would only work on wire – yarn doesn’t have the structural integrity to be knit with just a hook; you need to support each of the active loops.
So was the knit structure just invented separately by Viking silversmiths, before it spread to anyone else? I think it might have been. It’s just such a long time before we see knit cloth, and we have this other plausible story for how the cloth got there.
(I wondered if there was a connection between the Viking knitting and their sources of silver. Vikings did get their silver from the Islamic world, but as far as I can tell, mostly from Iran, which is pretty far from Egypt and doesn’t have an ancient knitting history – so I can’t find any connection there.)
The Egyptian socks
Let’s go back to those first knit garments (that aren’t belts), the Egyptian knit blue-and-white socks. There are maybe a few dozen of these, now found in museums around the world. They seem to have been pulled out of Egypt (people think Fustat) by various European/American collectors. People think that they were made around 1000-1300 CE. The socks are quite similar: knit, made of cotton, in white and 1-3 shades of indigo, with geometric designs sometimes including Kufic characters.
Sock at the George Washington University Textile Museum [Museum listing] || Sock at the Metropolitan Museum of Art [Museum listing] || Socks from the Detroit Institute of Art [Museum listing] || Sock at the George Washington University Textile Museum [Museum listing]
I can’t find a more specific origin location (than “probably Egypt, maybe Fustat?”) for any of them. The possible first sock mentioned in the last post is one of these – I don’t know if there are any particular reasons for thinking that sock is older than the others.
This one doesn’t seem to be knit OR naalbound. Anne Marie Decker at Nalbound.com thinks it’s crocheted and that the date is just completely wrong. To me, at least, this casts doubt on all the other dates of similar-looking socks.
That anomalous sock scared me. What if none of them had been carbon-dated? Oh my god, they’re probably all scams and knitting was invented in 1400 and I’m wrong about everything. But I was told in a historical knitting Facebook group that at least one had been dated. I found the article, and a friend from a Minecraft Discord helped me out with an interlibrary loan. I was able to locate the publication where Antoine de Moor, Chris Verhecken-Lammens and Mark Van Strydonck did in fact carbon-date four ancient blue-and-white knit cotton socks and found that they dated back to approximately 1100 CE – with a 95% chance that they were made somewhere between 1062 and 1149 CE. Success!
Helpful research tip: for the few times when the SCA websites fail you, try your Facebook groups and your Minecraft Discords.
Also, here’s a knit fragment of a mitten found in Estonia. (I don’t have the expertise or the mitten to determine it myself, but Anneke Lyffland (another SCA name), a scholar who studied it, is aware of cross-knit-looped naalbinding – like the Peruvian knit-lookalikes mentioned in the last post – and doesn’t believe this mitten was naalbound.) It was part of a burial dated to 1238-1299 CE. This is fascinating and does suggest a culture of knitted practical objects, in Eastern Europe, in this time period. This is the earliest East European non-sock knit fabric garment that I’m aware of.
But as far as I know, this is just the one mitten. I don’t know much about archaeology in the area and era, and can’t speculate as to whether this is evidence that knitting was rare or whether we have very few wool textiles from the area and it’s not that surprising. (The voice of shoulder-Thomas-Bayes says: Lots of things are evidence! Okay, I can’t speculate as to whether it’s strong evidence, are you happy, Reverend Bayes?) Then again, a bunch of speculation in this post is also based on two maybe-belts, so, oh well. Take this with salt.
By the way, remember when I said crochet was super-duper modern, like invented in the 1700s?
Literally a few days ago, who but the dream team of Cary “stringbed” Karp and Anne Marie Decker published an article in Archaeological Textiles Review identifying several ancient probably-Egyptian socks thought to be naalbound as being actually crocheted.
This comes down to the thing about fabric structures versus techniques. There’s a structure called slip stitch that can be either crocheted or naalbound. And since we know naalbinding is that old, if you’re looking at an old garment and see slip stitch, maybe you say it was naalbound. But basically no fabric garment is just continuous structure all the way through. How do the edges work? How did it start and stop? Are there any pieces worked differently, like the turning of a heel or a cuff or a border? Those parts might be more clearly worked with a crochet hook than a naalbinding needle. And indeed, that’s what Karp and Decker found. This might mean that those pieces are forgeries – there’s been no carbon dating. But it might mean crochet is much, much older than previously thought.
My hypothesis
Knitting was invented sometime around or perhaps before 600 CE in Egypt.
From Egypt, it spread to other Muslim regions.
It spread into Europe via one or more of these:
Ordinary cultural diffusion northwards
Islamic influence in the Iberian Peninsula
In 711 CE, the Iberian Peninsula was conquered by the Umayyad Caliphate, becoming Al-Andalus…
Kicking off a lot of Islamic presence in and control over the area up until 1400 CE or so…
Meanwhile, starting in 1095 CE, the Latin Church called for armies to take Jerusalem from its Muslim rulers (at the Byzantines’ request), kicking off the Crusades.
…Peppering Arabic influences into Europe, particularly France, over the next couple centuries.
… Also, the Vikings were there. They separately invented the knitting structure in wire, but never got around to trying it out in cloth, perhaps because the required technique was different.
Another possibility
Wrynne, AKA Baroness Rhiall of Wystandesdon (what did I say about SCA websites?), a woman who knows a thing or two about socks, believes that, based on these plus the design of other historical knit socks, the route goes something like:
I don’t know enough about socks to have a sophisticated opinion on her evidence, but the reasoning seems solid to me. For instance, as she explains, old Western European socks are knit from the cuff of the sock down, whereas old Middle Eastern and East European socks are knit from the toe of the sock up – which is also how Eastern and Northern European naalbound socks were shaped. Baroness Rhiall thinks Western Europe invented its sockmaking techniques independently, based on only a little experience with a few late-1200s/1300s knit pieces from Moorish artisans.
What about tools?
Here’s my best guess: The Egyptian tubes were made on knitting looms.
The Viking tubes were invented separately, made with a metal hook as stringbed speculates, and never had any particular connection to knitting yarn.
At some point, in the Middle East, someone figured out knitting needles. The Egyptian socks and Estonian mitten and most other things were knit in the round on double-ended needles.
I don’t like this as an explanation, mostly because it posits three separate tools involved in the earliest knit structures – that seems overly complicated. But it’s what I’ve got.
Knitting in the tracks of naalbinding
I don’t know if this is anything, but here are some places we also find lots of naalbinding, beginning from well before the medieval period: Egypt. Oman. The UAE. Syria. Israel. Denmark. Norway. Sweden. Sort of the same path that we predict knitting traveled in.
I don’t know what I’m looking at here.
Maybe this isn’t real, and these places just happen to preserve textiles better
Longstanding trade or migration routes between North Africa, the Middle East, and Eastern Europe?
Culture of innovation in fiber?
Maybe fiber is more abundant in these areas, and thus there was more affordance for experimenting. (See below.)
It might be a coincidence. But it’s an odd coincidence, if so.
Why did it take so long for someone to invent knitting?
This is the question I set out to answer in the initial post, but then it turned into a whole thing and I don’t think I ever actually answered my question. Very, very speculatively: I think knitting is just so complicated that it took thousands of years, and an environment rich in fiber innovation, for someone to invent and make use of the series of steps that is knitting.
Take this next argument with a saltshaker, but: my intuitions back this up. I have a good visual imagination. I can sort of “get” how a slip knot works. I get sewing. I understand weaving, I can boil it down in my mind to its constituents.
There are birds that do a form of sewing and a form of weaving. I don’t want to imply that if an animal can figure it out, it’s clearly obvious – I imagine I’d have a lot of trouble walking if I were thrown into the body of a centipede, and chimpanzees can drastically outperform humans on certain cognitive tasks – but I think, again, it’s evidence that it’s a simpler task in some sense.
Same with sprang. It’s not a process I’m familiar with, but watching Sally Pointer do it on a very primitive loom, I can understand it and could probably do it now. Naalbinding – well, it’s knots, and given a needle and knowing how to make a knot, I think it’s pretty straightforward to tie a bunch of knots on top of each other to make fabric out of it.
But I’ve been knitting for quite a while now and have finished many projects, and I still can’t say I totally get how knitting works. I know there’s a series of interconnected loops, but how exactly they don’t fall apart? How the starting string turns into the final project? It’s not in my head. I only know the steps.
I think that if you erased my memory and handed me some simple tools, especially a loom, I could figure out how to make cloth by weaving. I think there’s also a good chance I could figure out sprang, and naalbinding. But I think that if you handed me knitting needles and string – even if you told me I was trying to get fabric made from a bunch of loops that are looped into each other – I’m not sure I would get to knitting.
(I do feel like I might have a shot at figuring out crochet, though, which is supposedly younger than any of these anyway, so maybe this whole line of thinking means nothing.)
Idle hands as the mother of invention?
Why do we innovate? Is necessity the mother of invention?
This whole story suggests not – or at least, that’s not the whole story. We have the first knit structures in belts (already existed in other forms) and decorative silver wire (strictly ornamental.) We have knit socks from Egypt, not a place known for demanding warm foot protection. What gives?
Elizabeth Wayland Barber says this isn’t just knitting – she points to the spinning jenny and the power loom, both innovations in textile production, which were invented recently by men despite thousands of previous years of women producing yarn and cloth. In Women’s Work: The First 20,000 Years, she writes:
“Women of all but the top social and economic classes were so busy just trying to get through what had to be done each day that they didn’t have excess time or materials to experiment with new ways of doing things.”
This suggests a somewhat different mechanism of invention – sure, you need a reason to come up with (or at least follow up on) a discovery, but you also need the space to play. Ninety percent of everything is crap; you need to be really sure that you can throw away (or unravel, or afford the time to re-make) 900 crappy garments before you hit upon the sock.
Bill Bryson, in the introduction to his book At Home, writes about the phenomenon of clergy in the UK in 1700s and 1800s. To become an ordained minister, one needed a university degree, but not in any particular subject, and little ecclesiastical training. Duties were light; most ministers read a sermon out of a prepared book once a week and that was about it. They were paid in tithes from local landowners. Bryson writes:
“Though no one intended it, the effect was to create a class of well-educated, wealthy people who had immense amounts of time on their hands. In consequence many of them began, quite spontaneously, to do remarkable things. Never in history have a group of people engaged in a broader range of creditable activities for which they were not in any sense actually employed.”
He describes some of the great amount of intellectual work that came out of this class, including not only the aforementioned power loom, but also: scientific descriptions of dinosaurs, the first Icelandic dictionary, Jack Russell terriers, submarines, aerial photography, the study of archaeology, Malthusian traps, the telescope that discovered Uranus, werewolf novels, and – courtesy of the original Thomas Bayes – Bayes’ theorem.
I offhandedly posited a random per-person effect in the previous post – each individual has a chance of inventing knitting, so eventually someone will figure it out. There’s no way this can be the whole story. A person in a culture that doesn’t make clothes mostly out of thread, like the traditional Inuit (thread is used to sew clothes, but the clothes are very often sewn out of animal skin rather than woven fabric), seems really unlikely to invent knitting. They wouldn’t have lots of thread about to mess around with. So you need the people to have a degree of familiarity with the materials. You need some spare resources. Some kind of cultural lenience for doing something nonstandard.
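For concreteness, here’s what that per-person model looks like worked out – the population and the per-person probability below are pure placeholders, not estimates of anything real:

```python
import math

# Toy "random per-person invention" model: if each of N thread-handling
# people independently has a tiny chance p of inventing knitting in a
# given year, how long until someone probably has?
N = 1_000_000   # people who regularly handle thread (invented number)
p = 1e-9        # per-person, per-year chance of invention (invented number)

per_year = 1 - (1 - p) ** N                      # P(at least one invention per year)
years = math.log(0.5) / math.log(1 - per_year)   # years until invention is more likely than not

print(f"P(invented in any given year): {per_year:.2%}")   # ~0.10%
print(f"years until 50/50 odds: {years:,.0f}")            # ~693
```

The punchline: even absurdly small per-person odds make invention nearly inevitable within centuries, so the waiting-time story only works if something else – materials, spare time, cultural room to experiment – is gating who gets to roll the dice at all.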
…But is that the whole story? The Incan Empire was enormous, with 12,000,000 citizens at its height. They didn’t have a written language. They had the quipu system for recording numbers with knotted string, but they didn’t have a written language. (Their neighbors, the Mayans, did.) Easter Island, between its colonization by humans in 1000 CE and its worse colonization by Europeans in 1700 CE, had a maximum population of maybe 12,000. It’s one of the most remote islands in the world. In isolation from other societies, they did develop a written language, in fact Polynesia’s only native written language.
One of ~26 surviving pieces of Rongorongo, the undeciphered written script of Easter Island. This is Text R, the “Small Washington tablet”. Photo from the Smithsonian Institution. (Image rotated to correspond with the correct reading order, as a courtesy to any Rongorongo readers in my audience. Also, if there are any Rongorongo readers in my audience, please reach out. How are you doing that?!) || The same tablet with the symbols slightly clearer. Image found on kohaumoto.org, a very cool Rongorongo resource.
I don’t know what to do with that.
Still. My rough model is:
The concept of this chart amused me way too much not to put it in here. Sorry.
(“Survivorship bias” meaning: I think it’s safe to say that if your culture never developed (or lost) the art of sewing, the culture might well have died off. Manipulating thread and cloth is just so useful! Same with hunting, or fishing for a small island culture, etc.)
…What do you mean Loopholes has articles about the history of the autoharp?! My Renaissance man aspirations! Help!
1 (Uh, usually. You can finger weave with just a stick or two to anchor some yarn to, but it wasn’t widespread, possibly because it’s hard to make the cloth very wide.)
2 I had this whole thing ready to go about how a knit belt was ridiculous because a knit tube isn’t actually very stretchy “vertically” (or “warpwise”), and most of its stretch is “horizontal” (or “weftwise”). But then I grabbed a knit tube (a fingerless glove) in my environment and measured it at rest and stretched, and it stretched about as far both ways. So I’m forced to consider that a knit belt might be a reasonable thing to make for its stretchiness. Empiricism: try it yourself!
3 Fun fact: Plant-based fibers (cotton, linen, etc.) are mostly made of carbohydrates. Animal-based fibers (silk, wool, alpaca, etc.) and leather are mostly made of protein. Fens are wetlands that are alkaline; bogs are acidic. Carbohydrates decay in acidic bogs but are well-preserved in alkaline fens. Proteins dissolve in alkaline fens but last in acidic bogs. So it’s easier to find preserved animal material or fibers in bogs, and preserved plant material or fibers in fens.
So you’ve heard about how fish aren’t a monophyletic group? You’ve heard about carcinization, the process by which ocean arthropods convergently evolve into crabs? You say you get it now? Sit down. Sit down. Shut up. Listen. You don’t know nothing yet.
“Trees” are not a coherent phylogenetic category. On the evolutionary tree of plants, trees are regularly interspersed with things that are absolutely, 100% not trees. This means that, for instance, either:
The common ancestor of a maple and a mulberry tree was not a tree.
The common ancestor of a stinging nettle and a strawberry plant was a tree.
And this is true for most trees or non-trees that you can think of.
I thought I had a pretty good guess at this, but the situation is far worse than I could have imagined.
Partial phylogenetic tree of various plants. TL;DR: Tan is definitely, 100% trees. Yellow is tree-like. Green is 100% not a tree. Sourced mostly from Wikipedia.
I learned after making this chart that tree ferns exist (h/t seebs), which I think just emphasizes my point further. Also, h/t kithpendragon on LW for suggestions on increasing accessibility of the graph.
Why do trees keep happening?
First, what is a tree? It’s a big long-lived self-supporting plant with leaves and wood.
Also of interest to us are the non-tree “woody plants”, like lianas (thick woody vines) and shrubs. They’re not trees, but at least to me, it’s relatively apparent how a tree could evolve into a shrub, or vice-versa. The confusing part is a tree evolving into a dandelion. (Or vice-versa.)
Wood, as you may have guessed by now, is also not a clear phyletic category. But it’s a reasonable category – a lignin-dense structure, usually one that grows from the exterior, and one that forms a pretty readily identifiable material when separated from the tree. (…Okay, not the most explainable, but you know wood? You know when you hold something in your hand, and it’s made of wood, and you can tell that? Yeah, that thing.)
All plants have lignin and cellulose as structural elements – wood is plant matter that is dense with both of these.
Botanists don’t seem to think it could only have gone one way – for instance, the common ancestor of flowering plants is theorized to have been woody. But we also have pretty clear evidence of recent evolution of woodiness – say, a new plant arrives on a relatively barren island, and some of the offspring of that plant become treelike. Among plants native to the Canary Islands alone, woodiness independently evolved at least 38 times!
One relevant factor is that all woody plants do, in a sense, begin life as herbaceous plants – by and large, a tree sprout shares a lot of properties with any herbaceous plant. Indeed, botanists call the kind of fleshy, soft growth from the center that elongates a plant “primary growth”, and the later growth towards the outside that thickens the plant “secondary growth”. In a woody plant, secondary growth also means growing wood and bark – but other plants sometimes do secondary growth as well, like potatoes in their roots.
This paper addresses the question. I don’t understand a lot of the finer genetic details, but my impression of its thesis is this: analysis of convergently-evolved woody plants shows that the genes for secondary woody growth are similar to the genes for primary growth in plants that don’t do any secondary growth – even in unrelated plants. And woody growth is an adaptation of secondary growth. To abstract a little more, there is a common and useful structure in herbaceous plants that, when slightly tweaked, “dendronizes” them into woody plants.
Dendronization – Evolving into a tree-like morphology. (In the style of “carcinization”.) From ‘dendro’, the ancient Greek root for tree.
Can this be tested? Yep – researchers knocked out a couple of genes that control flower development, changed the light levels to mimic summer, and found that Arabidopsis – rock cress, a distinctly herbaceous plant used as a model organism – grows a woody stem never otherwise seen in the species.
So not only can wood develop relatively easily in an herbaceous plant, it can come from messing with some of the genes that regulate annual behavior – an herby plant’s usual lifecycle of reproducing in warm weather and dying off in cool weather. So that gets us two properties of trees at once: woodiness, and being long-lived. It’s still a far cry from turning a plant into a tree, but also, it’s really not that far.
“Obviously, in the search for which genes make a tree versus a herbaceous plant, it would be folly to look for genes present in poplar and absent in Arabidopsis. More likely, tree forms reflect differences in expression of a similar suite of genes to those found in herbaceous relatives.”
So: There are no unique “tree” genes. It’s just a different expression of genes that plants already use. Analogously, you can make a cake with flour, sugar, eggs, butter, and vanilla. You can also make frosting with sugar, butter, and vanilla – a subset of the ingredients you already have, but in different ratios and uses.
But again, the reverse also happens – a tree needs to do both primary and secondary growth, so it’s relatively easy for a tree lineage to drop the secondary growth stage and remain an herb for its whole lifespan, thus “poaizing”. As stated above, it’s hypothesized that the earliest angiosperms were woody, and some of them would have lost that woodiness on the way to becoming the most familiar herbaceous plants of today. There are also some plants like cassytha and mistletoe – herbaceous plants from tree-heavy lineages, both parasites that grow on a host tree. Knowing absolutely nothing about the evolution of these lineages, I think it’s reasonable to speculate that they each came from a tree-like ancestor but poaized to become parasites. (Evolution is very fond of parasites.)
Poaization – Evolving into an herbaceous morphology. From ‘poai’, the ancient Greek term Theophrastus used for herbaceous plants (“Theophrastus on Herbals and Herbal Remedies”).
(I apologize to anyone I’ve ever complained to about jargon proliferation in rationalist-diaspora blog posts.)
The trend of staying in an earlier stage of development is also called neoteny. Axolotls are an example in animals – they resemble the juvenile stages of the closely-related tiger salamander. Did you know that, very rarely, or when exposed to hormone-affecting substances, axolotls “grow up” into something that looks a lot like a tiger salamander? Not unlike the gene-altered Arabidopsis.
A normal axolotl (left) vs. a spontaneously-metamorphosed “adult” axolotl (right).
A friend asked why I was so interested in this finding about trees evolving convergently. To me, it’s that a tree is such a familiar, everyday thing. You know birds? Imagine if actually there were amphibian birds and mammal birds and insect birds flying all around, and they all looked pretty much the same – feathers, beaks, little claw feet, the lot. You’d have to be a real bird expert to tell an insect bird from a mammal bird. Also, most people wouldn’t even know that there isn’t just one kind of “bird”. That’s what’s going on with trees.
I was also interested in culinary applications of this knowledge. You know people who get all excited about “don’t you know a tomato is a fruit?” or “a blueberry isn’t really a berry?” I was one once, it’s okay. Listen, forget all of that.
There is a kind of botanical definition of a fruit and a berry, based on which parts of common plant anatomy and reproduction the structure in question derives from, but these definitions are definitely not related to the culinary or common understandings. (An apple, arguably the most central fruit of all to many people, is not truly a botanical fruit either.)
Let me be very clear here – mostly, this is not what biologists like to say. When we say a bird is a dinosaur, we mean that a bird and a T. rex share a common ancestor that had recognizably dinosaur-ish properties, and that we can generally point to some of those properties in the bird as well – feathers, bone structure, whatever. You can analogize this to similar statements you may have heard – “a whale is a mammal”, “a spider is not an insect”, “a hyena is a feline”…
But this is not what’s happening with fruit. Most “fruits” or “berries” are not descended from a common “fruit” or “berry” ancestor. Citrus fruits are all derived from a common fruit, and so are apples and pears, and plums and apricots – but an apple and an orange, or a fig and a peach, do not share a fruit ancestor.
Instead of trying to get uppity about this, may I recommend the following:
Acknowledge that all of our categories are weird and a little arbitrary
Send a fruit basket to your local botanist/plant evolutionary biologist for putting up with this, or become one yourself
While natural selection is commonly thought to simply be an ongoing process with no “goals” or “end points”, most scientists believe that life peaked at Welwitschia.
Avocado and cinnamon are from fairly closely-related tree species.
It’s possible that the last common ancestor between an apple and a peach was not even a tree.
Of special interest to my Pacific Northwest readers, the Seattle neighborhood of Magnolia is misnamed after the local madrona tree, which Europeans confused with the (similar-looking) magnolia. In reality, these two species are only very distantly related. (You can find them both on the chart to see exactly how far apart they are.)
None of [cactuses, aloe vera, jade plants, snake plants, and the succulent I grew up knowing as “hens and chicks”] are related to each other.
Rubus is the genus that contains raspberries, blackberries, dewberries, salmonberries… that kind of thing. (Remember, a genus is the category just above a species – which is kind of a made-up distinction, but suffice to say, this is a closely-related group of plants.) Some of its members have 14 chromosomes. Some of its members have 98 chromosomes.
Seriously, I’m going to hand $20 in cash to the next plant taxonomy expert I meet in person. God knows bacteriologists and zoologists don’t have to deal with this.
And I have one more unanswered question. There doesn’t seem to be a strong trend of plants evolving into grasses, despite the fact that grasses are quite successful and seem like just about the most anatomically simple plant there could be – root, big leaf, little flower, you’re good to go. But most grass-like plants are in the same group. Why don’t more plants evolve towards the “grass” strategy?
Let’s get personal for a moment. One of my philosophical takeaways from this project is, of course, “convergent evolution is a hell of a drug.” A second is something like “taxonomy is not automatically a great category for regular usage.” Phylogenetics is absolutely fascinating, and I do wish people understood it better, and probably “there’s no such thing as a fish” is a good meme to have around, because most people do not realize that they’re genetically closer to a tuna than a tuna is to a shark – and “no such thing as a fish” invites that inquiry.
(You can, at least, say that a tree is a strategy. Wood is a strategy. Fruit is a strategy. A fish is also a strategy.)
At the same time, I have this vision in my mind of a clever person who takes this meandering essay of mine and goes around saying “did you know there’s no such thing as wood?” And they’d be kind of right.
But at the same time, insisting that “wood” is not a useful or comprehensible category would be the most fascinatingly obnoxious rhetorical move. Just the pinnacle of choosing the interestingly abstract over the practical whole. A perfect instance of missing the forest for – uh, the forest for …
Towards the end of writing this piece, I found that actual botanist Dan Ridley-Ellis made a tweet thread about this topic in 2019. See that for more like this from someone who knows what they’re talking about.
TL;DR: “Infohazard” means any kind of information that could be harmful in some fashion. Let’s use “cognitohazard” to describe information that could specifically harm the person who knows it.
Some people in my circle like to talk about the idea of information hazards or infohazards, which are dangerous information. This isn’t a fictional concept – Nick Bostrom characterizes a number of different types of infohazards in his 2011 paper that introduces the term (PDF available here). Lots of kinds of information can be dangerous or harmful in some fashion – detailed instructions for making a nuclear bomb. A signal or hint that a person is a member of a marginalized group. An extremist ideology. A spoiler for your favorite TV show. (Listen, an infohazard is a kind of hazard, not a measure of intensity. A papercut is still a kind of injury!)
I’ve been in places where “infohazard” is used in the Bostromian sense casually – to talk about, say, dual-use research of concern in the biological sciences, and to describe the specific dangers that might come from publishing procedures or results.
I’ve also been in more esoteric conversations where people use the word “infohazard” to talk about a specific kind of Bostromian information hazard: information that may harm the person who knows it. This is a stranger concept, but there are still lots of apparent examples – a catchy earworm. “You just lost the game.” More seriously, an easy method of committing suicide for a suicidal person. A prototypical fictional example is the “basilisk” fractal from David Langford’s 1988 short story BLIT, which kills you if you see it.
This is a subset of the original definition because it is harmful information, but it’s expected to harm the person who knows it in particular. For instance, detailed schematics for a nuclear weapon aren’t really expected to bring harm to a potential weaponeer – the danger is that the weaponeer will use them to harm others. But fully internalizing the information that Amazon will deliver you a 5-pound bag of Swedish Fish whenever you want is specifically a danger to you. (…Me.)
This disparate use of terms is confusing. I think Bostrom and his intellectual kith get the broader definition of “infohazard”, since they coined the word and are actually using it professionally.*
I propose we call the second thing – information that harms the knower – a cognitohazard.
Pictured: Instantiation of a cognitohazard. Something something red herrings.
This term is shamelessly borrowed from the SCP Foundation, which uses it in a similar way in fiction. I figure the usage can’t make the concept sound any more weird and sci-fi than it already does.
(Cognitohazards don’t have to be hazardous to everybody. Someone who hates Swedish Fish is not going to spend all their money buying bags of Swedish Fish off of Amazon and diving into them like Scrooge McDuck. For someone who loves Swedish Fish – well, no comment. I’d call this “a potential cognitohazard” if you were to yell it into a crowd with unknown opinions on Swedish Fish.)
Anyways, hope that clears things up.
* For a published track record of this usage, see: an academic paper from Future of Humanity Institute and Center for Health Security staff, another piece by Bostrom, an opinion piece by esteemed synthetic biologist Kevin Esvelt, a piece on synthetic biology by FHI researcher Cassidy Nelson, a piece by Phil Torres.
(UPDATE: The version I initially published proposed the term “memetic hazard” rather than “cognitohazard.” LessWrong commenter MichaelA kindly pointed out that “memetic hazard” already meant a different concept that better suited that name. Since I had only just put out the post, I decided to quickly backpedal and switch out the word for another one with similar provenance. I hate having to do this, but it sure beats not doing it. Thank you, MichaelA!)
There’s a particular emotion that I felt a lot over 2019, much more than any other year. I expect it to continue in future years. That emotion is what I’m calling “algorithmic horror”.
It’s confusion at a targeted ad on Twitter for a product you were just talking about.
It’s seeing a “recommended friend” on facebook, but who you haven’t seen in years and don’t have any contact with.
It’s skimming a tumblr post with a banal take and not really registering it, and then realizing it was written by a bot.
It’s a disturbing image from ArtBreeder, dreamed up by a computer.
Pictured: a normal dog. Don’t worry about it. It’s fine.
I see this as an outgrowth of ancient, evolution-calibrated emotions. Back in the day, our lives depended on quick recognition of the signs of other animals – predator, prey, or other humans. There’s a moment I remember from animal tracking where disparate details of the environment suddenly align – the way the twigs are snapped and the impressions in the dirt suddenly resolve themselves into the idea of deer.
In the built environment of today, we know that most objects are built by human hands. Still, it can be surprising to walk in an apparently remote natural environment and find a trail or structure, evidence that someone has come this way before you. Skeptic author Michael Shermer calls this “agenticity”, the human bias towards seeing intention and agency in all sorts of patterns.
the trouble is humans are literally structured to find “a wizard did it” a more plausible explanation than things just happening by accident for no reason.
I see algorithmic horror as an extension of this, built objects masquerading as human-generated. I looked up oarfish merchandise on Amazon, to see if I could buy anything commemorating the world’s best fish, and found this hat.
It’s a bit incredible. Presumably, no #oarfish hat has ever existed. No human ever created an #oarfish hat or decided that somebody would like to buy them. Possibly, nobody had ever even viewed the #oarfish hat listing until I stumbled onto it.
In a sense this is just an outgrowth of custom-printing services that have been around for decades, but… it’s weird, right? It’s a weird ecosystem.
But human involvement can be even worse. All of those weird Youtube kids’ videos were made by real people. Many of them are acted out by real people. But they were certainly made to market to children, on Youtube, and named and designed to fit into a thoughtless algorithm. You can’t tell me that an adult human was ever like “you know what a good artistic work would be?” and then made “Learn Colors Game with Disney Frozen, PJ Masks Paw Patrol Mystery – Spin the Wheel Get Slimed” without financial incentives created by an automated program.
If you want a picture of the future, imagine a faceless adult hand pulling a pony figurine out of a plastic egg, while taking a break between cutting glittered balls of playdoh in half, silent while a prerecorded version of Skip To My Lou plays in the background, forever.
Everything discussed so far is relatively inconsequential, foreshadowing rather than the shade itself. But algorithms are still affecting the world and harming people now – setting racially-biased bail in Kentucky, potentially-biased hiring decisions, facilitating companies recording what goes on in your home, even career Youtubers forced to scramble and pivot as their videos become more or less recommended.
To be clear, algorithms also do a great deal of good – increasing convenience and efficiency, decreasing resource consumption, probably saving lives as well. I don’t mean to write this to say “algorithms are all-around bad”, or even “algorithms are net bad”. Sometimes an algorithm is deployed solely with good intentions and still sounds incredibly creepy, like how Facebook judges how suicidal all of its users are.
This is an elegant instance of Goodhart’s Law. Goodhart’s Law says that if you want a certain result and issue rewards for a metric related to the result, you’ll start getting optimization for the metric rather than the result.
The Youtube algorithm – like other algorithms across the web – is created to connect people with content (in order to sell to advertisers, etc.). Producers of content want to attract as much attention as possible to sell their products.
But the algorithms just aren’t good enough to perfectly offer people the online content they want. They’re simplified, rely on keywords, can be duped, etcetera. And everyone knows that potential customers aren’t going to trawl through hundreds of pages of online content themselves for the best “novelty mug” or “kid’s video”. So a lot of content exists, and decisions are made, that fulfill the algorithm’s criteria rather than our own.
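Here’s a toy simulation of that dynamic – my own sketch, with entirely made-up numbers, not any real platform’s algorithm. The “recommender” can only see a cheap proxy signal (think keyword-stuffing), producers imitate whatever gets promoted, and the proxy climbs while actual quality goes nowhere:

```python
import random

random.seed(0)

def make_video(keywords):
    # "quality" is what viewers actually want; the platform can't see it.
    # "keywords" is the cheap proxy signal the algorithm *can* see.
    return {"quality": random.random(), "keywords": keywords}

# Generation 0: nobody is gaming anything yet.
videos = [make_video(random.random()) for _ in range(1000)]

for gen in range(4):
    # The platform promotes the top 10% by proxy score.
    promoted = sorted(videos, key=lambda v: v["keywords"], reverse=True)[:100]
    avg_proxy = sum(v["keywords"] for v in promoted) / len(promoted)
    avg_quality = sum(v["quality"] for v in promoted) / len(promoted)
    print(f"gen {gen}: proxy score {avg_proxy:.2f}, actual quality {avg_quality:.2f}")
    # Producers copy the winners: keyword-stuffing is free, quality isn't.
    videos = [make_video(min(1.0, v["keywords"] + random.random() * 0.3))
              for v in promoted for _ in range(10)]
```

Run it and the proxy score saturates at 1.0 within a couple of generations while the promoted videos’ quality hovers around the population average – optimization for the metric, not the result.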
In a sense, when we look at the semi-coherent output of algorithms, we’re looking into the uncanny valley between the algorithm’s values and our own.
We live in strange times. Good luck to us all for 2020.
Aside from its numerous forays into real life, algorithmic horror has also been at the center of some stellar fiction. See:
This is an internet mystery that is now mostly defunct. I’m going to write it down here anyways in case someone can tell me what was going on, or will be able to in the future.
UPDATE, 2020-05-17: UrracaWatch briefly went back up in December 2019. It is down again, but this time I was able to capture a version on The Internet Archive. Here’s a link to that archived version.
The mystery involved a number of Twitter accounts. The posts on these accounts had a few things in common:
Links to apparently random web pages related to chemical weapons, biological weapons, or health care
These links are routed through “UrracaWatch.com” before leading to the final link
No commentary
The accounts also have a few other properties:
Real-sounding usernames and display names
No other posts on the account
I tried reverse-image-searching a couple account-related images and didn’t see anything. James Diggans on Twitter tried doing the same for account profile photos (of people) and also didn’t find results.
The choice of websites linked was very strange. They looked like someone had searched for various chem/bioweapon/health-related words, then chosen random websites from the first page or two of search results. Definition pages, scholarly articles, products (but all from very different websites).
Tweets from one of the UrracaWatch Twitter accounts.
Some example UrracaWatch bot account handles: DeterNoBoom, fumeFume31, ChemOrRiley, ChristoBerk, BioWeaP0n, ScienceGina, chempower2112, ChemistWannabe. All of these looked exactly like the Mark Davis @ChemPower2112 account. (Sidenote: I really wish I had archived these more properly. If you find an internet mystery you might want to investigate later, save all the pages right away. You’re just going to have to take me on faith. Alternatively, if you have more screenshots of any of these websites or accounts, please send them to me.)
If this actually is weird psy-op propaganda, I think “Holly England @VaxyourKid” represents a rare example of pro-vaccination English propaganda, as opposed to the more common anti-vaccination propaganda. Also, not to side with the weird maybe-psy-op, but vaccinate your kids.
And here are some facts about the (now-defunct) website UrracaWatch:
The website had a very simple format – a list of links (the same kinds of bio/chem/health links that end up on the twitter pages), and a text bar at the top for entering new links.
(I tried using it to submit a link and didn’t see an immediate new entry on the page.)
There were no advertisements, information on the creators, other pages, etc.
According to the page source code and Ghostery (a tracker- and cross-request-detecting Firefox extension), there were no trackers, counters, advertisers, or any other complexity on the site.
According to the ICANN registry, the domain UrracaWatch.com was registered 9-17-2018 via GoDaddy. The domain has now expired as of 9-17-2019, probably as part of a 12-month domain purchase.
Urraca is a Spanish word for magpie, which was a messenger of death in the view of the Anasazi people. (The messenger-of-death part probably isn’t relevant here, but they mention the word as part of a real-life spooky historical site in The Black Tapes Podcast, and this added an unavoidable sinister flavor.) (Urraca is also a woman’s name.)
As far as I can tell, nobody aside from these twitterbots has ever linked to or used UrracaWatch.com for anything at all, anywhere on the web.
By and large, the twitterbots – and I think they must be bots – have been banned. The website is down.
But come on, what on earth was UrracaWatch?
Some possibilities:
Advertisement scheme
Test of some kind of Twitter-scraping link- or ad-bot that happened to focus on the biodefense community on twitter for some reason
Weird psy-op
I’m dubious of the advertisement angle. I’ve been enjoying a lot of the podcast Reply All lately, especially their episodes on weird scams. There’s an interesting question raised in my favorite episode (The Case of the Phantom Caller) while dissecting a weird communication, and I asked it here too: I just can’t see how anyone is making money off of this. Again, there were occasional product links, but they went to all different websites that looked like legitimate stores, and I don’t think I ever saw multiple links to the same store.
That leaves “bot test” and “weird psy-op”, or something I haven’t thought of yet. If it was propaganda, it wasn’t very good. If you have a guess about what was going on, let me know.
Epistemic status: Speculative, just having fun. This piece isn’t well-cited, but I can pull up sources as needed – nothing about mole-rats is my original research. A lot of this piece is based on Wikipedia.
When I wrote about “weirdness” in the past, I called marine invertebrates, archaea, viruses, and Florida Man stories “predictably weird”. This means I wasn’t really surprised to learn any new wild fact about them. But there’s a sense in which marine invertebrates both are and aren’t weird. I want to try operationalizing “weirdness” as “the amount of unpredictability or diversity present in a class” (or “in an individual”) compared to other members of its group.
Invertebrates represent most of the strategies that animals have attempted on earth, and certainly most of the animals on earth. Vertebrates are the odd ones out.
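One toy way to make that operationalization concrete – my sketch, with entirely made-up trait lists, not real taxonomic data – is to score each group by how different its members are from one another, on average:

```python
from itertools import combinations

# Toy "weirdness as diversity" score: average pairwise difference between
# members' trait sets within a group. Groups and traits here are invented
# purely for illustration.
groups = {
    "vertebrates": [
        {"spine", "jaws", "camera_eye"},
        {"spine", "jaws", "camera_eye", "fur"},
        {"spine", "jaws", "camera_eye", "feathers"},
    ],
    "marine_invertebrates": [
        {"shell", "radula"},
        {"tentacles", "camera_eye", "jet_propulsion"},
        {"stinging_cells", "radial_symmetry"},
    ],
}

def diversity(members):
    # Jaccard distance averaged over all pairs: 1.0 = nothing in common.
    pairs = list(combinations(members, 2))
    return sum(1 - len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

for name, members in groups.items():
    print(f"{name}: diversity = {diversity(members):.2f}")
```

On this (rigged, illustrative) data, the invertebrates come out far more internally diverse – “weirder” – than the vertebrates, which is the intuition I’m gesturing at.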
But you know which animals are profoundly weird, no matter which way you look at it? Naked mole rats. Naked mole-rats have like a dozen properties that are not just unusual, not just strange, but absolutely batshit. Let’s review.
1. They don’t age
What? Well, for most animals, the chance of dying goes up over time. Look at a population and you’ll find a mortality curve that climbs with age.
Mole-rats, though, have the same chance of dying at any age. Their mortality curve is flat.
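In lieu of the graphs, here’s a minimal sketch of the difference – made-up parameters, not real mole-rat data:

```python
import math

def survivors(hazard, max_age):
    """Fraction of a cohort still alive at each age, given a yearly hazard fn."""
    alive, curve = 1.0, []
    for age in range(max_age):
        alive *= 1.0 - min(hazard(age), 1.0)
        curve.append(alive)
    return curve

aging     = lambda age: 0.01 * math.exp(0.15 * age)  # risk climbs with age
non_aging = lambda age: 0.05                          # same risk at any age

for age in (5, 15, 25):
    a = survivors(aging, 30)[age - 1]
    n = survivors(non_aging, 30)[age - 1]
    print(f"alive at {age}: aging species {a:.2f}, mole-rat-style {n:.2f}")
```

The aging species looks fine early on and then the survival curve falls off a cliff; the mole-rat-style cohort just declines at the same steady exponential rate forever.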
They’re joined, more or less, by a few species of jellyfish, flatworms, turtles, lobsters, and at least one fish.
They’re hugely long-lived compared to other rodents, seen in zoos at 30+ years old compared to the couple brief years that rats get.
2. They don’t get cancer
Cancer generally seems to be the curse of multicellular beings, but naked mole-rats are an exception. A couple mole-rats have developed cancer-like growths in captivity, but no wild mole-rat has ever been found with cancer.
4. They’re eusocial
Definitely unique among mammals. Like bees, ants, and termites, naked mole-rats have a single breeding “queen” in each colony, and other “worker” individuals exist in castes that perform specific tasks. In an evolutionary sense, this means that the “unit of selection” for the species is the queen, not any individual – the queen’s genes are the ones that get passed down.
They’re also a fascinating case study of an animal whose existence was deduced before it was proven. Nobody knew about eusocial mammals for a long time. In 1974, entomologist Richard Alexander, who studied eusocial insects, wrote down a set of environmental characteristics he thought would be required for a eusocial mammal to evolve. In 1981 and over the following decade, naked mole-rats – a perfect match for his predictions – were found to be eusocial.
5. They don’t have fur
Obviously. But aside from genetic flukes or domesticated breeds, that puts them in a small unlikely group with only some marine mammals, rhinoceroses, hippos, elephants, one species of boar, and… us.
You and this entity have so much in common.
6. They’re able to survive ridiculously low oxygen levels
They use very little oxygen during normal metabolism, much less than comparably-sized rodents, and they can survive for hours at 5% oxygen (a quarter of normal levels).
7. Their front teeth move back and forth like chopsticks
I’m not actually sure how common this is in rodents. But it really weirded me out.
9. They can’t thermoregulate
They have basically no ability to adjust their body temperature internally, perhaps because their caves tend to stay at a rather constant temperature. If they need to be a different temperature, they can huddle together, or move to a higher or lower level in their burrow.
All of this makes me think that mole-rats must have some underlying unusual properties which lead to all this – a “weirdness generator”, if you will.
A lot of these are connected to the fact that mole-rats spend almost their entire lives underground. There are lots of burrowing animals, but “almost their entire lives” is pretty unusual – they don’t surface to find food, water, or (usually) mates. (I think they might only surface when digging tunnels and when a colony splits.) So this might explain (8) – no need for a sleep schedule when you can’t see the sun. It also seems to explain (5) and (9), because fur and internal thermoregulation are unnecessary when you live at a pretty constant temperature.
It probably explains (6) because lower burrow levels might have very little oxygen most of the time, although there’s some debate about this – their burrows might actually be pretty well ventilated.
And Richard Alexander’s 12 postulates that would lead to a eusocial vertebrate – plus some other knowledge of eusociality – suggest that this underground climate, when combined with the available lifestyle and food sources of a mole-rat, should lead to eusociality.
It might also be the source of (2) and (3) – people have theorized that higher CO2 or lower oxygen levels in burrows might reduce DNA damage, or affect neuron function, or something. (This would also explain why only mole-rats in captivity have had tumors, since they’re kept at atmospheric oxygen levels.) These ideas still seem to be up in the air, though. Mole-rats clearly have a variety of fascinating biochemical tricks that are still being understood.
So there’s at least one “weirdness generator” that leads to all of these strange mole-rat properties. There might be more.
I’m pretty sure it’s not the chopstick teeth (7), at least – but as with many predictions one could make about mole rats, I could easily be wrong.
Here’s a pattern I’d like to be able to talk about. It might be known under a certain name somewhere, but if it is, I don’t know it. I call it a Spaghetti Tower. It shows up in large complex systems that are built haphazardly.
Someone or something builds the first Part A.
Later, someone wants to put a second Part B on top of Part A, either out of convenience (a common function, just somewhere to put it) or as a refinement to Part A.
Now, suppose you want to tweak Part A. If you do that, you might break Part B, since it interacts with bits of Part A. So you might instead build Part C on top of the previous ones.
And by the time your system looks like this, it’s much harder to tell what changes you can make to an earlier part without crashing some component, so you’re basically relegated to throwing another part on top of the pile.
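Here’s that buildup as a toy code example – hypothetical functions of my own invention, just to make the failure mode concrete:

```python
# A miniature spaghetti tower. Part B quietly depends on an internal
# detail of Part A, and Part C on B's exact output, so a harmless-looking
# tweak to A breaks everything above it.

# Part A: someone builds a config loader that returns a dict.
def load_config():
    return {"timeout": 30, "retries": 3}

# Part B: someone else, for convenience, reaches into A's internals --
# B assumes "timeout" is always present and is an int of seconds.
def connect():
    cfg = load_config()
    return f"connecting with timeout={cfg['timeout']}s"

# Part C: built on top of B's exact output string.
def log_connection():
    msg = connect()
    seconds = int(msg.split("timeout=")[1].rstrip("s"))  # parses B's string!
    return f"logged: {seconds}"

# Now try "improving" Part A -- say, renaming "timeout" to "timeout_ms".
# Part B raises KeyError; even if you patch B, Part C's string-parsing
# breaks next. So in practice you leave A alone and pile on a Part D...
print(log_connection())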
I call these spaghetti towers for two reasons: One, because they tend to quickly take on circuitous knotty tangled structures, like what programmers call “spaghetti code”. (Part of the problem with spaghetti code is that it can lead to spaghetti towers.)
Especially since they’re usually interwoven in multiple dimensions, and thus look more like this:
“Can you just straighten out the yellow one without touching any of the others? Thanks.”
Two, because shortsightedness in the design process is a crucial part of spaghetti machines. In order to design a spaghetti system, you throw spaghetti against a wall and see if it sticks. Then, when you want to add another part, you throw more spaghetti until it sticks to that spaghetti. And later, you throw more spaghetti. So it goes. And if you decide that you want to tweak the bottom layer to make it a little more useful – which you might want to do because, say, it was built out of spaghetti – without damaging the next layers of gummy partially-dried spaghetti, well then, good luck.
Note that all systems have load-bearing, structural pieces. This does not make them spaghetti towers. The distinction about spaghetti towers is that they have a lot of shoddily-built structural components that are completely unintentional. A bridge has major load-bearing components – they’re pretty obvious, strong, elegant, and efficiently support the rest of the structure. A spaghetti tower is more like this.
The motto of the spaghetti tower is “Sure, it works fine, as long as you never run lukewarm water through it and turn off the washing machine during thunderstorms.” || Image from the always-delightful r/DiWHY.
Where do spaghetti towers appear?
Basically all of biology works like this. Absolutely all of evolution is made by throwing spaghetti against walls and seeing what sticks. (More accurately, throwing nucleic acid against harsh reality and seeing what successfully makes more nucleic acid.) We are 3.5 billion years of hacks in fragile trench coats.
Scott Alexander of Slate Star Codex describes the phenomenon in neurotransmitters, but it’s true for all of molecular biology:
You know those stories about clueless old people who get to their Gmail account by typing “Google” into Bing, clicking on Google in the Bing search results, typing “Gmail” into Google, and then clicking on Gmail in the Google search results?
I am reading about serotonin transmission now, and everything in the human brain works on this principle. If your brain needs to downregulate a neurotransmitter, it’ll start by upregulating a completely different neurotransmitter, which upregulates the first neurotransmitter, which hits autoreceptors that downregulate the first neurotransmitter, which then cancel the upregulation, and eventually the neurotransmitter gets downregulated.
Meanwhile, my patients are all like “How come this drug that was supposed to cure my depression is giving me vision problems?” and at least on some level the answer is “how come when Bing is down your grandfather can’t access Gmail?”
My programming friends tell me that spaghetti towers are near-universal in the codebases of large companies, where it would theoretically be nice if every function were neatly ordered, but actually, the thing you’re working on has three different dependencies, two of which are unmaintained and were abandoned when the guy who built them went to work at Google, and you can never be 100% certain that your code tweak won’t crash the site.
I think this also explains some of why bureaucracies look and act the way they do, and are so hard to change.
I think there are probably a lot of examples of spaghetti towers, and they probably have big ramifications for things like, for instance, what systems evolution can and can’t build.
I want to do a much deeper and more thoughtful analysis about what exactly the implications here are, but this has been kicking around my brain for long enough and all I want to do is get the concept out there.
Does this feel like a meaningful concept? Where do you see spaghetti towers?
[EDIT: A previous version of this post had some errors. Thanks to jeff8765 for pinpointing the error, and to esrogs in the comments for bringing it to my attention as well. This has been fixed. Also, I wrote FHI when I meant FLI.]
The graph of the human population over time is also a map of human experience. Think of each year as being “amount of human lived experience that happened this year.” On the left, we see the approximate dawn of the modern human species in 50,000 BC. On the right, the population exploding in the present day.
It turns out that if you add up all these years, 50% of human experience has happened after 1309 AD. 15% of all experience has been experienced by people who are alive right now.
I call this “the funnel of human experience” – the fact that because of a tiny initial population blossoming out into a huge modern population, more of human experience has happened recently than time would suggest.
50,000 years is a long time, but 8,000,000,000 people is a lot of people.
Early human experience: casts of the skulls of the earliest modern humans found on various continents. Display at the Smithsonian National Museum of Natural History.
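Here’s roughly how the headline calculation works, as a sketch. The population table below is coarse and purely illustrative (the real dataset is linked at the bottom of this post), and I assume exponential growth within each interval:

```python
import math

# (year, population) -- coarse illustrative estimates, NOT the post's dataset.
POP = [(-50000, 1e5), (-10000, 4e6), (0, 2e8), (1000, 3e8), (1500, 5e8),
       (1800, 1e9), (1900, 1.6e9), (1950, 2.5e9), (2000, 6.1e9), (2020, 7.8e9)]

def person_years(p0, p1, years):
    # Time integral of an exponential running from p0 to p1: the log-mean.
    return p0 * years if p0 == p1 else (p1 - p0) / math.log(p1 / p0) * years

spans = [(y0, y1, person_years(p0, p1, y1 - y0))
         for (y0, p0), (y1, p1) in zip(POP, POP[1:])]
total = sum(py for _, _, py in spans)

# Walk forward until the running total crosses 50% of all person-years.
# (Linear interpolation inside the crossing span is rough, but it's a sketch.)
running = 0.0
for y0, y1, py in spans:
    if running + py >= total / 2:
        frac = (total / 2 - running) / py
        print(f"~50% of human experience falls after year {y0 + frac * (y1 - y0):.0f}")
        break
    running += py
```

With this coarse table the crossover lands in the 1200s; the finer-grained dataset behind this post puts it at 1309.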
If you want to expand on this, you can start doing some Fermi estimates. We as a species have spent…
1,650,000,000,000 total “human experience years”
See my dataset linked at the bottom of this post.
7,450,000,000 human years spent having sex
Humans spend 0.45% of our lives having sex. 0.45% * [total human experience years] = 7E9 years
52,000,000,000 years spent drinking coffee
500 billion cups of coffee drunk this year x 15 minutes to drink each cup x 100 years* = 5E10 years
*Coffee consumption has likely been much higher recently than historically, but it does have a long history. I’m estimating about a hundred years of current consumption for total global consumption ever.
1,000,000,000 years spent in labor
110,000,000,000 humans ever x ½ women x 12 pregnancies* x 15 hours apiece = 1.1E9 years
*Infant mortality, yo. H/t Ellie and Shaw for this estimate.
417,000,000 years spent worshipping the Greek gods
1000 years* x 10,000,000 people** x 365 days a year x 1 hour a day*** = 4E8 years
*Some googling suggested that people worshipped the Greek/Roman Gods in some capacity from roughly 500 BC to 500 AD.
**There were about 10 million people in Ancient Greece. This probably tapered a lot to the beginning and end of that period, but on the other hand worship must have been more widespread than just Greece, and there have been pagans and Hellenists worshiping since then.
***Worshiping generally took about an hour a day on average, figuring in priests and festivals? Sure.
30,000,000 years spent watching Netflix
140,000,000 hours/day* x 365 days x 5 years** = 2.92E7 years
* Netflix users watched an average of 140 million hours of content a day in 2017.
**Netflix has been streaming for about 10 years, but has gotten much bigger recently.
50,000 years spent drinking coffee in Waffle House
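If you want to sanity-check or extend these, the only fiddly part is converting person-hours into person-years. A quick sketch, run against two of the estimates above:

```python
HOURS_PER_YEAR = 24 * 365

def fermi_years(person_hours):
    """Convert total person-hours into person-years."""
    return person_hours / HOURS_PER_YEAR

# Greek gods: 1000 years x 10 million worshippers x 365 days x 1 hour/day
print(f"Greek gods: {fermi_years(1000 * 10_000_000 * 365 * 1):.1e} years")

# Netflix: 140 million hours/day x 365 days/year x 5 effective years
print(f"Netflix:    {fermi_years(140_000_000 * 365 * 5):.1e} years")
```

This prints roughly 4.2e8 and 2.9e7 person-years, matching the figures above.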
So humanity in aggregate has spent about ten times as long worshiping the Greek gods as we’ve spent watching Netflix.
We’ve spent another ten times as long having sex as we’ve spent worshipping the Greek gods.
And we’ve spent ten times as long drinking coffee as we’ve spent having sex.
I’m not sure what this implies. Here are a few things I gathered from this:
1) I used to be annoyed at my high school world history classes for spending so much time on medieval history and after, when there was, you know, all of history before that too. Obviously there are other reasons for this – Eurocentrism, the fact that more recent events have clearer ramifications today – but to some degree this is in fact accurately reflecting how much history there is.
On the other hand, I spent a bunch of time in school learning about the Greek Gods, a tiny chunk of time learning about labor, and virtually no time learning about coffee. This is another disappointing trend in the way history is approached and taught, focusing on a series of major events rather than the day-to-day life of people.
2) The Funnel gets more stark the closer you move to the present day. Look at science. FLI reports that 90% of PhDs that have ever lived are alive right now. That means most of all scientific thought is happening in parallel rather than sequentially.
3) You can’t use the Funnel to reason about everything. For instance, you can’t use it to reason about extended evolutionary processes. Evolution is necessarily cumulative. It works on the unit of generations, not individuals. (You can make some inferences about evolution – for instance, the likelihood of any particular mutation occurring increases when there are more individuals to mutate – but evolution still has the same number of generations to work with, no matter how large each generation is.)
4) This made me think about the phrase “living memory”. The world’s oldest living person is Kane Tanaka, who was born in 1903. 28% of the entirety of human experience has happened since her birth. As mentioned above, 15% has been directly experienced by living people. We have writing and communication and memory, so we have a flawed channel by which to inherit information, and experiences in a sense. But humans as a species can only directly remember as far back as 1903.