Category Archives: here’s something weird

An old knit tube with colorful stripes

Who invented knitting? The plot thickens

Last time on Eukaryote Writes Blog: You learned about knitting history.

You thought you were done learning about knitting history? You fool. You buffoon. I wanted to double check some things in the last post and found out that the origins of knitting are even weirder than I guessed.

Humans have been wearing clothes to hide our sinful sinful bodies from each other for maybe about 20,000 years. To make clothes, you need cloth. One way to make cloth is animal skin or membrane, that is, leather. If you want to use it in any complicated or efficient way, you also need some way to sew it – with very thin strips of leather, or with sinew or plant fiber spun into thread. Also popular since very early on is taking that thread and turning it into cloth. There are a few ways to do this.

A drawing showing loose fiber, which turns into twisted thread, which is arranged in various ways to make different kinds of fabric structures. Depicted are the structures for: naalbound, woven, knit, looped, and twined fabric.
By the way, I’m going to be referring to “thread” and “yarn” interchangeably from here on out. Don’t worry about it.

(Can you just sort of smush the fiber into cloth without making it into thread? Yes. This is called felting. How well it works depends on the material properties of the fiber. A lot of traditional Pacific Island cloth was felted from tree bark.)

Now with all of these, you could probably make some kind of cloth by taking threads and, by hand, shaping them into these different structures. But that sounds exhausting and nobody did that. Let’s get tools involved. Each of these structures corresponds to a different manufacturing technique.

By far, the most popular way of making cloth is weaving. Everyone has been weaving for tens of thousands of years. It’s not quite a cultural universal but it’s damn close. To weave, you need a loom.1 There are ten million kinds of loom. Most primitive looms can make a piece of cloth that is, at most, the size of the loom. So if you want to make a tunic that’s three feet wide and four feet long, you need cloth that’s at least three feet wide and four feet long, and thus, a loom that’s at least three feet wide and four feet long. You can see how weaving was often a stationary affair.

Recap

Here’s what I said in the last post: Knitting is interesting because the manufacturing process is pretty simple, needs simple tools, and is portable. The final result is also warm and stretchy, and can be made in various shapes (not just flat sheets). And yet, it was invented fairly recently in human history.

I mostly stand by what I said in the last post. But since then I’ve found some incredible resources, particularly the scholarly blogs Loopholes by Cary “stringbed” Karp and Nalbound by Anne Marie Decker, which have sent me down new rabbit-holes. The Egyptian knit socks I outlined in the last post sure do seem to be the first known knit garments, like, a piece of clothing that is meant to cover your body. They’re certainly the first known ones that take advantage of knitting’s unique properties: of being stretchy, of being manufacturable in arbitrary shapes. The earliest knitting is… weirder.

SCA websites

Quick sidenote – I got into knitting because, in grad school, I decided that in the interests of well-roundedness and my ocular health, I needed hobbies that didn’t involve reading research papers. (You can see how far I got with that). So I did two things: I started playing the autoharp, and I learned how to knit. Then, I was interested in the overlap between nerds and handicrafts, so a friend in the Society for Creative Anachronism pitched me on it and took me to a coronation. I was hooked. The SCA covers “the medieval period”; usually, 1000 CE through 1600 CE.

I first got into the history of knitting because I was checking if knitting counted as a medieval period art form. I was surprised to find that the answer was “yes, but barely.” As I kept looking, a lot of the really good literature and analysis – especially experimental archaeology – came out of blogs of people who were into it as a hobby, or perhaps as a lifestyle that had turned into a job, like historical reenactment. This included a lot of people in the SCA, who had gone into these depths before and just wrote down what they found and published it for someone else to find. It’s a really lovely knowledge tradition to find oneself a part of.

Aren’t you forgetting sprang?

There’s an ancient technique that gets some of the benefits of knitting, which I didn’t get to in the last post. It’s called sprang. Mechanically, it’s kind of like braiding. Like weaving, sprang requires a loom (the size of the cloth it produces) and makes a flat sheet. Like knitting, however, it’s stretchy.

Sprang shows up in lots of places – the oldest known examples are from around 1400 BCE in Denmark, but it also appears elsewhere in Europe, plus (before colonization!): Egypt, the Middle East, central Asia, India, Peru, Wisconsin, and the North American Southwest. Here’s a video where re-enactor Sally Pointer makes a sprang hairnet with Iron Age materials.

Despite being widespread, it was never a common way to make cloth – everyone was already weaving. The question of the hour is: Was it used to make socks?

Well, there were probably sprang leggings. Dagmar Drinkler has made historically-inspired sprang leggings, which demonstrate that sprang colorwork creates some of the intricate designs we see painted on Greek statues – like this 480 BCE Persian archer.

I haven’t found any attestations of historical sprang socks. The Sprang Lady has made some, but they’re either tube socks or have separately knitted soles.

Why weren’t there sprang socks? Why didn’t sprang, widespread as it is, take on the niche that knitting took?

I think there are two reasons. One, remember that a sock is a shaped garment – tube-like, usually with a bend at the heel – and that, like weaving, sprang makes a flat sheet. If you want another shape, you have to sew it in. It’s going to lose some stretch at the seam. It’s just more steps and skills than knitting a sock.

The second reason is warmth. I’ve never done sprang myself – from what I can tell, it has more of a net-like openness upon manufacture, unlike knitting which comes with some depth to it. Even weaving can easily be made pretty dense simply by putting the threads close together. I think, overall, a sprang fabric garment made with primitive materials is going to be less warm than a knit garment made with primitive materials.

Those are my guesses. I bring it up merely to note that there was another thread → cloth technique that made stretchy things that didn’t catch on the same way knitting did. If you’re interested in sprang, I cannot recommend The Sprang Lady’s work highly enough.

Anyway, let’s get back to knitting.

Knitting looms

The whole thing about Roman dodecahedrons being (hypothetically) used to knit glove fingers, described in the last post? I don’t think that was actually the intended purpose, for the reasons I described re: knitting wasn’t invented yet. But I will cop to the best argument in its favor, which is that you can knit glove fingers with a Roman dodecahedron.

“But how?” say those of you not deeply familiar with various fiber arts. “That’s not needles,” you say.

You got me there. This is a variant of a knitting loom. A knitting loom is a hoop with pegs used to make knit tubes. This can be the basis of a knitting machine, but you can also knit on one on its own. They make more consistent knit tubes with less required hand-eye coordination. (You can also make flat panels with them, especially with a version called a knitting rake, but since all of the early knitting we’re talking about is tubes anyhow, let’s ignore that for the time being.)

Knitting on a modern knitting loom. || Photo from Cynthia M. Parker on flickr, under a CC BY-SA 2.0 license.

Knitting on a loom is also called spool knitting (because you can use a spool with nails in it as the loom for knitting a cord) and tomboy knitting (…okay). Structurally, I think this is also basically the same thing as lucet cord-making, so let’s go ahead and throw that in with this family of techniques. (The earliest lucets are from ~1000 CE Viking Sweden and perhaps medieval Viking Britain.)

The important thing to note is that loom knitting makes a result that is, structurally, knit. It’s difficult to tell whether a given piece is knit with a loom or needles, if you didn’t see it being made. But since it’s a different technique, different aspects become easier or harder.

A knitting loom sounds complicated but isn’t hard to make, is the thing. Once you have nails, you can make one easily by putting them in a wood ring. You could probably carve one from wood with primitive tools. Or forge one. So we have the question: Did knitting needles or knitting looms come first?

We actually have no idea. There aren’t objects that are really clearly knitting needles OR knitting looms until long after the earliest pieces of knitting. This strikes me as a little odd, since wood and especially metal should preserve better than fabric, but it’s what we’ve got. It’s probably not helped by the fact that knitting needles are basically just smooth straight sticks, and it’s hard to say that any smooth straight stick is conclusively a knitting needle (unless you find it with half a sock still on it.)

(At least one author, Isela Phelps, speculates that finger-knitting, which uses the fingers of one hand like a knitting loom and makes a chunky knit ribbon, came first – presumably because, well, it’s easier to start from no tools than to start from a specialized tool. This is possible, although the earliest knit objects are too fine and have too many stitches to have been finger-knit. The creators must have used tools.)

(stringbed also points out that a piece of whale baleen can be used as a circular knitting needle, and that the relevant cultures did have access to and trade in whale parts. While we have no particular evidence that baleen was used this way, it does mean that humanity wouldn’t have had to invent plastic before inventing the circular knitting needle – we could have had one since prehistory. So, I don’t know, maybe it was whales.)

THE first knitting

The earliest knit objects we have… ugh. It’s not the Egyptian socks. It’s this.

Photo of an old, long, thin knit tube in lots of striped colors.
One of the oldest knit objects. || Photo from Musée du Louvre, AF 6027.

There is a pair of long, thin, colorful knit tubes, about an inch wide and a few feet long. They’re pretty similar to each other. Due to the problems inherent in time passing and the flow of knowledge, we know one of them is probably from Egypt, and it was carbon-dated to 425-594 CE. The other quite similar tube, of a similar age, has not been carbon-dated but is definitely from Egypt. (The original source text for this second artifact is in German, so I didn’t bother trying to find it, and instead refer to stringbed’s analysis. See also matthewpius guestblogging on Loopholes.) So between the two of them, we have a strong guess that these knit tubes were manufactured in Egypt around 425-594 CE, about 500 years before socks.

People think it was used as a belt.

This is wild to me. Knitting is stretchy, and I did make fun of those peasants in 1300 CE for not having elastic waistlines, so I could see a knitted belt being more comfortable than other kinds of belts.2 But not a lot better. A narrow knit belt isn’t going to distribute force onto the body much differently than a regular non-stretchy belt, and regular non-stretchy belts were already in great supply – woven, rope, leather, etc. Someone invented a whole new means of cloth manufacture and used it to make a slightly different version of a thing that already existed.

Then, as far as I can tell, there are no knit objects in the known historical record for five hundred years until the Egyptian socks pop up.

Pulling objects out of the past is hard. Especially things made from cloth or animal fibers, which rot (as compared to metal, pottery, rocks, bones, which last so long that in the absence of other evidence, we name ancient cultures based on them.) But every now and then, we can. We’ve found older bodies and textiles preserved in ice and bogs and swamps.3 We have evidence of weaving looms and sewing needles and pictures of people spinning or weaving cloth and descriptions of them doing it, from before and after. I’m guessing that the technology just took a very long time to diversify beyond belts.

Speaking of which: how was the belt made? As mentioned, we don’t find anything until much later that is conclusively a knitting needle or a knitting loom. The belts are also, according to matthewpius on Loopholes, made with a structure called double knitting. That structure is (as demonstrated by Pallia – another historical reenactor blog!) kind of hard to achieve with knitting needles the way these pieces did it, but pretty simple to do on a knitting loom.

(Another Egyptian knit tube belt from an unclear number of centuries later.)

Viking knitting

You think this is bad? Remember before how I said knitting was a way of manufacturing cloth, but that it was also definable as a specific structure of thread that could be made with different methods?

The oldest knit object in Europe might be a cup.

Photo of a richly decorated old silver cup.
The Ardagh Chalice. || Photo by Sailko under a CC BY-SA 3.0 license.

You gotta flip it over.

Another photo of the ornate chalice from the equally ornate bottom. Red arrows point to some intricate wire decorations around the rim.
Underside of the Ardagh Chalice. || Adapted from a Metropolitan Museum image.

Enhance.

Black and white zoom in on the wire decorations. It's more  clearly a knit structure.
Photo from Robert M. Organ’s 1963 article “Examination of the Ardagh Chalice-A Case History”, where they let some people take the cup apart and put it back together after.

That’s right, this decoration on the bottom of the Ardagh Chalice is knit from wire.
Another example is the decoration on the side of the Derrynaflan Paten, a plate made in the 700s or 800s CE in Ireland. All the examples seem to be from churches, hidden by or from Vikings. Over the next few hundred years, there are some other objects in this technique. They’re tubes knitted from silver wire. “Wait, can you knit with wire?” Yes. Stringbed points out that knitting wire with needles or a knitting loom would be tough on the valuable silver wire – they could break or distort it.

Photo of an ornate silver plate with gold decorations. There are silver knit wire tubes around the edge.
The Derrynaflan Paten, zoomed in on the knit decorations around the edge. || Adapted from this photo by Johnbod, under a CC BY-SA 3.0 license.

What would make sense to do it with is a little hook, like a crochet hook. But that would only work on wire – yarn doesn’t have the structural integrity to be knit with just a hook; you need to support each of the active loops.

So was the knit structure just invented separately by Viking silversmiths, before it spread to anyone else? I think it might have been. It’s just such a long time before we see knit cloth in Europe, and we have this other plausible story for how knit cloth got there.

(I wondered if there was a connection between the Viking knitting and their sources of silver. Vikings did get their silver from the Islamic world, but as far as I can tell, mostly from Iran, which is pretty far from Egypt and doesn’t have an ancient knitting history – so I can’t find any connection there.)

The Egyptian socks

Let’s go back to those first knit garments (that aren’t belts), the Egyptian knit blue-and-white socks. There are maybe a few dozen of these, now found in museums around the world. They seem to have been pulled out of Egypt (people think Fustat) by various European/American collectors. People think that they were made around 1000-1300 CE. The socks are quite similar: knit, made of cotton, in white and 1-3 shades of indigo, with geometric designs sometimes including Kufic characters.

I can’t find a specific origin location (other than “probably Egypt, maybe Fustat?”) for any of them. The possible first sock mentioned in the last post is one of these – I don’t know if there are any particular reasons for thinking that sock is older than the others.

This one doesn’t seem to be knit OR naalbound. Anne Marie Decker at Nalbound.com thinks it’s crocheted and that the date is just completely wrong. To me, at least, this casts doubt on all the other dates of similar-looking socks.

That anomalous sock scared me. What if none of them had been carbon-dated? Oh my god, they’re probably all scams and knitting was invented in 1400 and I’m wrong about everything. But I was told in a historical knitting facebook group that at least one had been dated. I found the article, and a friend from a minecraft discord helped me out with an interlibrary loan. I was able to locate the publication where Antoine de Moor, Chris Verhecken-Lammens and Mark Van Strydonck did in fact carbon-date four ancient blue-and-white knit cotton socks and found that they dated back to approximately 1100 CE – with a 95% chance that they were made somewhere between 1062 and 1149 CE. Success!

Helpful research tip: for the few times when the SCA websites fail you, try your facebook groups and your minecraft discords.

Estonian mitten

Photo of a tattered old fragment of knitting. There are some colored designs on it in blue and red.
Yeah, this is all of it. Archeology is HARD. [Image from Anneke Lyffland’s writeup.]

Also, here’s a knit fragment of a mitten found in Estonia. (I don’t have the expertise or the mitten to determine it myself, but Anneke Lyffland (another SCA name), a scholar who studied it, is aware of cross-knit-looped naalbinding – like the Peruvian knit-lookalikes mentioned in the last post – and doesn’t believe this was naalbound.) It was part of a burial dated to 1238-1299 CE. This is fascinating and does suggest a culture of knitted practical objects in Eastern Europe in this time period. This is the earliest Eastern European non-sock knit fabric garment that I’m aware of.

But as far as I know, this is just the one mitten. I don’t know much about archaeology in the area and era, and can’t speculate as to whether this is evidence that knitting was rare or whether we have very few wool textiles from the area and it’s not that surprising. (The voice of shoulder-Thomas-Bayes says: Lots of things are evidence! Okay, I can’t speculate as to whether it’s strong evidence, are you happy, Reverend Bayes?) Then again, a bunch of speculation in this post is also based on two maybe-belts, so, oh well. Take this with salt.

By the way, remember when I said crochet was super-duper modern, like invented in the 1700s?

Literally a few days ago, who but the dream team of Cary “stringbed” Karp and Anne Marie Decker published an article in Archaeological Textiles Review identifying several ancient probably-Egyptian socks thought to be naalbound as being actually crocheted.

This comes down to the thing about fabric structures versus techniques. There’s a structure called slip stitch that can be either crocheted or naalbound. Since we know naalbinding is that old, if you’re looking at an old garment and see slip stitch, maybe you say it was naalbound. But basically no fabric garment is just continuous structure all the way through. How do the edges work? How did it start and stop? Are there any pieces worked differently, like the turning of a heel or a cuff or a border? Those parts might be more clearly worked with a crochet hook than a naalbinding needle. And indeed, that’s what Karp and Decker found. This might mean that those pieces are forgeries – there’s been no carbon dating. But it might mean crochet is much, much older than previously thought.

My hypothesis

Knitting was invented sometime around or perhaps before 600 CE in Egypt.

From Egypt, it spread to other Muslim regions.

It spread into Europe via one or more of these:

  1. Ordinary cultural diffusion northwards
  2. Islamic influence in the Iberian Peninsula
    • In 711 CE, the Umayyad Caliphate conquered most of the Iberian Peninsula, which became Al-Andalus…
      • Kicking off a lot of Islamic presence in and control over the area up until 1400 CE or so…
  3. Meanwhile, starting in 1095 CE, the Latin Church called for armies to take Jerusalem from Muslim control (nominally in aid of the Byzantines), kicking off the Crusades.
    • …Peppering Arabic influences into Europe, particularly France, over the next couple centuries.

… Also, the Vikings were there. They separately invented the knitting structure in wire, but never got around to trying it out in cloth, perhaps because the required technique was different.

Another possibility

Wrynne, AKA Baroness Rhiall of Wystandesdon (what did I say about SCA websites?), a woman who knows a thing or two about socks, believes that, based on these socks plus the design of other historical knit socks, the route goes something like:

??? points to Iran, which points to: (A) Eastern Europe, then to (1) Norway and Sweden and (2) Russia; and (B) through ??? to Spain, then to Western Europe.

I don’t know enough about socks to have a sophisticated opinion on her evidence, but the reasoning seems solid to me. For instance, as she explains, old Western European socks are knit from the cuff of the sock down, whereas old Middle Eastern and Eastern European socks are knit from the toe of the sock up – which is also how Eastern and Northern European naalbound socks were shaped. Baroness Rhiall thinks Western Europe invented its sockmaking techniques independently, based on only a little experience with a few late 1200s/1300s knit pieces from Moorish artisans.

What about tools?

Here’s my best guess: The Egyptian tubes were made on knitting looms.

The Viking tubes were invented separately, made with a metal hook as stringbed speculates, and never had any particular connection to knitting yarn.

At some point, in the Middle East, someone figured out knitting needles. The Egyptian socks and Estonian mitten and most other things were knit in the round on double-ended needles.

I don’t like this as an explanation, mostly because of how it posits 3 separate tools involved in the earliest knit structures – that seems overly complicated. But it’s what I’ve got.

Knitting in the tracks of naalbinding

I don’t know if this is anything, but here are some places we also find lots of naalbinding, beginning from well before the medieval period: Egypt. Oman. The UAE. Syria. Israel. Denmark. Norway. Sweden. Sort of the same path that we predict knitting traveled in.

I don’t know what I’m looking at here.

  • Maybe this isn’t real and these places just happen to preserve textiles better
  • Longstanding trade or migration routes between North Africa, the Middle East, and Eastern Europe?
  • Culture of innovation in fiber?
  • Maybe fiber is more abundant in these areas, and thus there was more affordance for experimenting. (See below.)

It might be a coincidence. But it’s an odd coincidence, if so.

Why did it take so long for someone to invent knitting?

This is the question I set out to answer in the initial post, but then it turned into a whole thing and I don’t think I ever actually answered my question. Very, very speculatively: I think knitting is just so complicated that it took thousands of years, and an environment rich in fiber innovation, for someone to invent and make use of the series of steps that is knitting.

Take this next argument with a saltshaker, but: my intuitions back this up. I have a good visual imagination. I can sort of “get” how a slip knot works. I get sewing. I understand weaving, I can boil it down in my mind to its constituents.

There are birds that do a form of sewing and a form of weaving. I don’t want to imply that if an animal can figure it out, it’s clearly obvious – I imagine I’d have a lot of trouble walking if I were thrown into the body of a centipede, and chimpanzees can drastically outperform humans on certain cognitive tasks – but I think, again, it’s evidence that it’s a simpler task in some sense.

Same with sprang. It’s not a process I’m familiar with, but watching Sally Pointer do it on a very primitive loom, I can understand it and could probably do it now. Naalbinding – well, it’s knots, and given a needle and knowing how to make a knot, I think it’s pretty straightforward to tie a bunch of knots on top of each other to make fabric out of it.

But I’ve been knitting for quite a while now and have finished many projects, and I still can’t say I totally get how knitting works. I know there’s a series of interconnected loops, but how exactly do they not fall apart? How does the starting string turn into the final project? It’s not in my head. I only know the steps.

I think that if you erased my memory and handed me some simple tools, especially a loom, I could figure out how to make cloth by weaving. I think there’s also a good chance I could figure out sprang, and naalbinding. But I think that if you handed me knitting needles and string – even if you told me I was trying to get fabric made from a bunch of loops that are looped into each other – I’m not sure I would get to knitting.

(I do feel like I might have a shot at figuring out crochet, though, which is supposedly younger than any of these anyway, so maybe this whole line of thinking means nothing.)

Idle hands as the mother of invention?

Why do we innovate? Is necessity the mother of invention?

This whole story suggests not – or at least, that’s not the whole story. We have the first knit structures in belts (already existed in other forms) and decorative silver wire (strictly ornamental.) We have knit socks from Egypt, not a place known for demanding warm foot protection. What gives?

Elizabeth Wayland Barber says this isn’t just knitting – she points to the spinning jenny and the power loom, both major innovations in textile production, which were invented relatively recently, by men, despite thousands of previous years of women producing yarn and cloth. In Women’s Work: The First 20,000 Years, she writes:

“Women of all but the top social and economic classes were so busy just trying to get through what had to be done each day that they didn’t have excess time or materials to experiment with new ways of doing things.”

This suggests a somewhat different mechanism of invention – sure, you need a reason to come up with or at least follow up on a discovery, but you also need the space to play. 90% of everything is crap; you need to be really sure that you can throw away (or unravel, or afford the time to re-make) 900 crappy garments before you hit upon the sock.

Bill Bryson, in the introduction to his book At Home, writes about the phenomenon of clergy in the UK in the 1700s and 1800s. To become an ordained minister, one needed a university degree, but not in any particular subject, and little ecclesiastical training. Duties were light; most ministers read a sermon out of a prepared book once a week and that was about it. They were paid in tithes from local landowners. Bryson writes:

“Though no one intended it, the effect was to create a class of well-educated, wealthy people who had immense amounts of time on their hands. In consequence many of them began, quite spontaneously, to do remarkable things. Never in history have a group of people engaged in a broader range of creditable activities for which they were not in any sense actually employed.”

He describes some of the great amount of intellectual work that came out of this class, including not only the aforementioned power loom, but also: scientific descriptions of dinosaurs, the first Icelandic dictionary, Jack Russell terriers, submarines, aerial photography, the study of archaeology, Malthusian traps, the telescope that discovered Uranus, werewolf novels, and – courtesy of the original Thomas Bayes – Bayes’ theorem.

I offhandedly posited a random per-person effect in the previous post – each individual has a chance of inventing knitting, so eventually someone will figure it out. There’s no way this can be the whole story. A person in a culture that doesn’t make clothes mostly out of thread, like the traditional Inuit (thread is used to sew clothes, but the clothes are very often sewn out of animal skin rather than woven fabric) seems really unlikely to invent knitting. They wouldn’t have lots of thread about to mess around with. So you need the people to have a degree of familiarity with the materials. You need some spare resources. Some kind of cultural lenience for doing something nonstandard.

…But is that the whole story? The Incan Empire was enormous, with 12,000,000 citizens at its height. They didn’t have a written language. They had the quipu system for recording numbers with knotted string, but they didn’t have a written language. (The Maya, over in Mesoamerica, did.) Easter Island, between its colonization by humans in 1000 CE and its worse colonization by Europeans in 1700 CE, had a maximum population of maybe 12,000. It’s one of the most remote islands in the world. In isolation from other societies, they did develop a written language – in fact, Polynesia’s only native written language.

Color photo of a worn wooden tablet engraved with intricate Rongorongo characters.
One of ~26 surviving pieces of Rongorongo, the undeciphered written script of Easter Island. This is Text R, the “Small Washington tablet”. Photo from the Smithsonian Institution. (Image rotated to correspond with the correct reading order, as a courtesy to any Rongorongo readers in my audience. Also, if there are any Rongorongo readers in my audience, please reach out. How are you doing that?!)
A black and white photo of the same tablet. The lines of characters are labelled (e.g. Line 1, Line 2) and the  symbols are easier to see. Some look like stylized humans, animals, and plants.
The same tablet with the symbols slightly clearer. Image found on kohaumoto.org, a very cool Rongorongo resource.

I don’t know what to do with that.

Still. My rough model is:

A businessy chart labelled "Will a specific group make a specific innovation?" There are three groups of factors feeding into each other. First is Person Factors, with a picture of a person in a power wheelchair: Consists of [number of people] times [degree of familiarity with art]. Spare resources (material, time). And cultural support for innovation. Second is Discovery Factors, with a picture of a microscope: Consists of how hard the idea "is to have", benefits from discovery, and [technology required] - [existing technology]. ("Existing technology" in blue because that's technically a person factor.) Third is Special Sauce, with a picture of a wizard. Consists of: Survivorship Bias and The Easter Island Factor (???)

The concept of this chart amused me way too much not to put it in here. Sorry.

(“Survivorship bias” meaning: I think it’s safe to say that if your culture never developed (or lost) the art of sewing, the culture might well have died off. Manipulating thread and cloth is just so useful! Same with hunting, or fishing for a small island culture, etc.)
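
If you’d rather have the joke chart as a joke function, here’s a toy rendition in Python. Every weight and factor name is made up, nothing is calibrated against anything, and the Easter Island factor remains a mystery constant – it’s the same hand-waving as the chart, just executable.

```python
# Toy rendition of the chart above. All inputs are 0-1 gut-feeling scores
# except `people`; none of this is calibrated against any real data.
def chance_of_innovation(
    people: int,
    familiarity: float,        # degree of familiarity with the art
    spare_resources: float,    # spare material and time
    cultural_support: float,   # cultural support for innovation
    idea_difficulty: float,    # how hard the idea is "to have"
    benefit: float,            # benefits from the discovery
    tech_gap: float,           # required technology minus existing technology
    easter_island_factor: float = 1.0,  # special sauce (???)
) -> float:
    person_factors = people * familiarity * spare_resources * cultural_support
    discovery_factors = (1 - idea_difficulty) * benefit * (1 - tech_gap)
    return person_factors * discovery_factors * easter_island_factor

# A big fiber-rich society with no slack vs. a small one with lots of slack:
print(chance_of_innovation(1_000_000, 0.9, 0.05, 0.3, 0.8, 0.4, 0.1))
print(chance_of_innovation(12_000, 0.9, 0.6, 0.7, 0.8, 0.4, 0.1))
```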

…What do you mean Loopholes has articles about the history of the autoharp?! My Renaissance man aspirations! Help!


Delightful: A collection of 1900s forgeries of Paracas textiles. They’re crocheted rather than naalbound.

1 (Uh, usually. You can finger weave with just a stick or two to anchor some yarn to, but it wasn’t widespread, possibly because it’s hard to make the cloth very wide.)

2 I had this whole thing ready to go about how a knit belt was ridiculous because a knit tube isn’t actually very stretchy “vertically” (or “warpwise”), and most of its stretch is “horizontal” (or “weftwise”). But then I grabbed a knit tube (a fingerless glove) in my environment and measured it at rest and stretched, and it stretched about as far both ways. So I’m forced to consider that a knit belt might be a reasonable thing to make for its stretchiness. Empiricism: try it yourself!

3 Fun fact: Plant-based fibers (cotton, linen, etc.) are mostly made of carbohydrates. Animal-based fibers (silk, wool, alpaca, etc.) and leather are mostly made of protein. Fens are wetlands that are alkaline; bogs are acidic. Carbohydrates decay in acidic bogs but are well-preserved in alkaline fens. Proteins dissolve in alkaline fens but last in acidic bogs. So it’s easier to find preserved animal material or fibers in bogs, and preserved plant material or fibers in fens.


Cross-posted to LessWrong.

There’s no such thing as a tree (phylogenetically)

So you’ve heard about how fish aren’t a monophyletic group? You’ve heard about carcinization, the process by which ocean arthropods convergently evolve into crabs? You say you get it now? Sit down. Sit down. Shut up. Listen. You don’t know nothing yet.

“Trees” are not a coherent phylogenetic category. On the evolutionary tree of plants, trees are regularly interspersed with things that are absolutely, 100% not trees. This means that, for instance, either:

  • The common ancestor of a maple and a mulberry tree was not a tree.
  • The common ancestor of a stinging nettle and a strawberry plant was a tree.
  • And this is true for most trees or non-trees that you can think of.

I thought I had a pretty good guess at this, but the situation is far worse than I could have imagined.

Partial phylogenetic tree of various plants. TL;DR: Tan is definitely, 100% trees. Yellow is tree-like. Green is 100% not a tree. Sourced mostly from Wikipedia.

I learned after making this chart that tree ferns exist (h/t seebs), which I think just emphasizes my point further. Also, h/t kithpendragon on LW for suggestions on increasing accessibility of the graph.

Why do trees keep happening?

First, what is a tree? It’s a big long-lived self-supporting plant with leaves and wood.

Also of interest to us are the non-tree “woody plants”, like lianas (thick woody vines) and shrubs. They’re not trees, but at least to me, it’s relatively apparent how a tree could evolve into a shrub, or vice-versa. The confusing part is a tree evolving into a dandelion. (Or vice-versa.)

Wood, as you may have guessed by now, is also not a clear phyletic category. But it’s a reasonable category – a lignin-dense structure, one that usually grows from the exterior and forms a pretty readily identifiable material when separated from the tree. (…Okay, not the most explainable, but you know wood? You know when you hold something in your hand, and it’s made of wood, and you can tell that? Yeah, that thing.)

All plants have lignin and cellulose as structural elements – wood is plant matter that is dense with both of these.

Botanists don’t seem to think it could only have gone one way – for instance, the common ancestor of flowering plants is theorized to have been woody. But we also have pretty clear evidence of recent evolution of woodiness – say, a new plant arrives on a relatively barren island, and some of the offspring of that plant become treelike. Among plants native to the Canary Islands, woodiness independently evolved at least 38 times!

One relevant factor is that all woody plants do, in a sense, begin life as herbaceous plants – by and large, a tree sprout shares a lot of properties with any herbaceous plant. Indeed, botanists call this kind of fleshy, soft growth from the center that elongates a plant “primary growth”, and the later growth toward the outside, which causes a plant to thicken, “secondary growth.” In a woody plant, secondary growth also means growing wood and bark – but other plants sometimes do secondary growth as well, like potatoes in their roots.

This paper addresses the question. I don’t understand a lot of the finer genetic details, but my impression of its thesis is that: analysis of convergently-evolved woody plants shows that the genes for secondary woody growth are similar to the genes for primary growth in plants that don’t do any secondary growth – even in unrelated plants. And woody growth is an adaptation of secondary growth. To abstract a little more, there is a common and useful structure in herbaceous plants that, when slightly tweaked, “dendronizes” them into woody plants.

Dendronization – Evolving into a tree-like morphology. (In the style of “carcinization“.) From ‘dendro‘, the ancient Greek root for tree.

Can this be tested? Yep – researchers knocked out a couple of genes that control flower development and changed the light levels to mimic summer, and found that Arabidopsis rock cress – a distinctly herbaceous plant used as a model organism – grows a woody stem never otherwise seen in the species.

The tree-like woody stem (e) and morphology (f, left) of the gene-altered Arabidopsis, compared to its distinctly non-tree-like normal form (f, right). Images from Melzer, Siegbert, et al. “Flowering-time genes modulate meristem determinacy and growth form in Arabidopsis thaliana.” Nature Genetics 40.12 (2008): 1489-1492.

So not only can wood develop relatively easily in an herbaceous plant, it can come from messing with some of the genes that regulate annual behavior – an herby plant’s usual lifecycle of reproducing in warm weather and dying off in cool weather. So that gets us two properties of trees at once: woodiness, and being long-lived. It’s still a far cry from turning a plant into a tree, but also, it’s really not that far.

To look at it another way, as Andrew T. Groover put it:

“Obviously, in the search for which genes make a tree versus a herbaceous plant, it would be folly to look for genes present in poplar and absent in Arabidopsis. More likely, tree forms reflect differences in expression of a similar suite of genes to those found in herbaceous relatives.”

So: There are no unique “tree” genes. It’s just a different expression of genes that plants already use. Analogously, you can make a cake with flour, sugar, eggs, butter, and vanilla. You can also make frosting with sugar, butter, and vanilla – a subset of the ingredients you already have, in different ratios and put to a different use.

But again, the reverse also happens – a tree needs to do both primary and secondary growth, so it’s relatively easy for a tree lineage to drop the “secondary” growth stage and remain an herb for its whole lifespan, thus “poaizing.” As stated above, it’s hypothesized that the earliest angiosperms were woody, some of which would have lost that woodiness to become the most familiar herbaceous plants today. There are also some plants like cassytha and mistletoe, herbaceous plants from tree-heavy lineages, which are both parasitic plants that grow on a host tree. Knowing absolutely nothing about the evolution of these lineages, I think it’s reasonable to speculate that they each came from a tree-like ancestor but poaized to become parasites. (Evolution is very fond of parasites.)

Poaization: Evolving into an herbaceous morphology. From ‘poai‘, ancient Greek term from Theophrastus defining herbaceous plants (“Theophrastus on Herbals and Herbal Remedies”).

(I apologize to anyone I’ve ever complained to about jargon proliferation in rationalist-diaspora blog posts.)

The trend of staying in an earlier stage of development is also called neoteny. Axolotls are an example in animals – they resemble the juvenile stages of the closely-related tiger salamander. Did you know that very rarely, or when exposed to hormone-affecting substances, axolotls “grow up” into something that looks a lot like a tiger salamander? Not unlike the gene-altered Arabidopsis.

A normal axolotl (left) vs. a spontaneously-metamorphosed “adult” axolotl (right.)

[Photo of normal axolotl from By th1098 – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=30918973. Photo of metamorphosed axolotl from deleted reddit user, via this thread: https://www.reddit.com/r/Eyebleach/comments/etg7i6/this_is_itzi_he_is_a_morphed_axolotl_no_thats_not/ ]

Does this mean anything?

A friend asked why I was so interested in this finding about trees evolving convergently. To me, it’s that a tree is such a familiar, everyday thing. You know birds? Imagine if actually there were amphibian birds and mammal birds and insect birds flying all around, and they all looked pretty much the same – feathers, beaks, little claw feet, the lot. You had to be a real bird expert to be able to tell an insect bird from a mammal bird. Also, most people don’t know that there isn’t just one kind of “bird”. That’s what’s going on with trees.


I was also interested in culinary applications of this knowledge. You know people who get all excited about “don’t you know a tomato is a fruit?” or “a blueberry isn’t really a berry?” I was one once, it’s okay. Listen, forget all of that.

There is a kind of botanical definition of a fruit and a berry, based on which parts of common plant anatomy and reproduction the structure in question is derived from, but those definitions are definitely not related to the culinary or common understandings. (An apple, arguably the most central fruit of all to many people, is not a true botanical fruit either – most of its flesh develops from tissue outside the ovary.)

Let me be very clear here – mostly, this is not what biologists like to say. When we say a bird is a dinosaur, we mean that a bird and a T. rex share a common ancestor that had recognizably dinosaur-ish properties, and that we can generally point to some of those properties in the bird as well – feathers, bone structure, whatever. You can analogize this to similar statements you may have heard – “a whale is a mammal”, “a spider is not an insect”, “a hyena is a feline”…

But this is not what’s happening with fruit. Most “fruits” or “berries” are not descended from a common “fruit” or “berry” ancestor. Citrus fruits are all derived from a common fruit-bearing ancestor, and so are apples and pears, and plums and apricots – but an apple and an orange, or a fig and a peach, do not share a fruit ancestor.

Instead of trying to get uppity about this, may I recommend the following:

  • Acknowledge that all of our categories are weird and a little arbitrary
  • Look wistfully at pictures of Welwitschia
  • Send a fruit basket to your local botanist/plant evolutionary biologist for putting up with this, or become one yourself
While natural selection is commonly thought to simply be an ongoing process with no “goals” or “end points”, most scientists believe that life peaked at Welwitschia.

[Photo from By Sara&Joachim on Flickr – Flickr, CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=6342924 ]

Some more interesting findings:

  • A mulberry (left) is not related to a blackberry (right). They just… both did that.
[ Mulberry photo by Cwambier – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=63402150. Blackberry photo by Ragesoss – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=4496657. ]
  • Avocado and cinnamon are from fairly closely-related tree species.
  • It’s possible that the last common ancestor between an apple and a peach was not even a tree.
  • Of special interest to my Pacific Northwest readers, the Seattle neighborhood of Magnolia is misnamed after the local madrona tree, which Europeans confused with the (similar-looking) magnolia. In reality, these two species are only very distantly related. (You can find them both on the chart to see exactly how far apart they are.)
  • None of [cactuses, aloe vera, jade plants, snake plants, and the succulent I grew up knowing as “hens and chicks”] are related to each other.
  • Rubus is the genus that contains raspberries, blackberries, dewberries, salmonberries… that kind of thing. (Remember, a genus is the category just above a species – which is kind of a made-up distinction, but suffice to say, this is a closely-related group of plants.) Some of its members have 14 chromosomes. Some of its members have 98 chromosomes.
  • Seriously, I’m going to hand $20 in cash to the next plant taxonomy expert I meet in person. God knows bacteriologists and zoologists don’t have to deal with this.

And I have one more unanswered question. There doesn’t seem to be a strong trend of plants evolving into grasses, despite the fact that grasses are quite successful and seem kind of like the most anatomically simple plant there could be – root, big leaf, little flower, you’re good to go. But most grass-like plants are in the same group. Why don’t more plants evolve towards the “grass” strategy?


Let’s get personal for a moment. One of my philosophical takeaways from this project is, of course, “convergent evolution is a hell of a drug.” A second is something like “taxonomy is not automatically a great category for regular usage.” Phylogenetics is absolutely fascinating, and I do wish people understood it better, and probably “there’s no such thing as a fish” is a good meme to have around because most people do not realize that they’re genetically closer to a tuna than a tuna is to a shark – and “no such thing as a fish” invites that inquiry.

(You can, at least, say that a tree is a strategy. Wood is a strategy. Fruit is a strategy. A fish is also a strategy.)

At the same time, I have this vision in my mind of a clever person who takes this meandering essay of mine and goes around saying “did you know there’s no such thing as wood?” And they’d be kind of right.

But at the same time, insisting that “wood” is not a useful or comprehensible category would be the most fascinatingly obnoxious rhetorical move. Just the pinnacle of choosing the interestingly abstract over the practical whole. A perfect instance of missing the forest for – uh, the forest for …

… Forget it.


Related:

Timeless Slate Star Codex / Astral Codex Ten piece: The categories were made for man, not man for the categories.

Towards the end of writing this piece, I found that actual botanist Dan Ridley-Ellis made a tweet thread about this topic in 2019. See that for more like this from someone who knows what they’re talking about.

For more outraged plant content, I really enjoy both Botany Shitposts (tumblr) and Crime Pays But Botany Doesn’t (youtube.)

[Crossposted to Lesswrong.]

A point of clarification on infohazard terminology

TL;DR: “Infohazard” means any kind of information that could be harmful in some fashion. Let’s use “cognitohazard” to describe information that could specifically harm the person who knows it.

Some people in my circle like to talk about the idea of information hazards or infohazards, which are dangerous information. This isn’t a fictional concept – Nick Bostrom characterizes a number of different types of infohazards in his 2011 paper that introduces the term (PDF available here). Lots of kinds of information can be dangerous or harmful in some fashion – detailed instructions for making a nuclear bomb. A signal or hint that a person is a member of a marginalized group. An extremist ideology. A spoiler for your favorite TV show. (Listen, an infohazard is a kind of hazard, not a measure of intensity. A papercut is still a kind of injury!)

I’ve been in places where “infohazard” is used in the Bostromian sense casually – to talk about, say, dual-use research of concern in the biological sciences, and describe the specific dangers that might come from publishing procedures or results.

I’ve also been in more esoteric conversations where people use the word “infohazard” to talk about a specific kind of Bostromian information hazard: information that may harm the person who knows it. This is a stranger concept, but there are still lots of apparent examples – a catchy earworm. “You just lost the game.” More seriously, an easy method of committing suicide for a suicidal person. A prototypical fictional example is the “basilisk” fractal from David Langford’s 1988 short story BLIT, which kills you if you see it.

This is a subset of the original definition because it is harmful information, but it’s expected to harm the person who knows it in particular. For instance, detailed schematics for a nuclear weapon aren’t really expected to bring harm to a potential weaponeer – the danger is that the weaponeer will use them to harm others. But fully internalizing the information that Amazon will deliver you a 5-pound bag of Swedish Fish whenever you want is specifically a danger to you. (…Me.)

This disparate use of terms is confusing. I think Bostrom and his intellectual kith get the broader definition of “infohazard”, since they coined the word and are actually using it professionally.*

I propose we call the second thing – information that harms the knower – a cognitohazard.

Pictured: Instantiation of a cognitohazard. Something something red herrings.

This term is shamelessly borrowed from the SCP Foundation, which uses it in a similar way in fiction. I figure the usage can’t make the concept sound any more weird and sci-fi than it already does.

(Cognitohazards don’t have to be hazardous to everybody. Someone who hates Swedish Fish is not going to spend all their money buying bags of Swedish Fish off of Amazon and diving into them like Scrooge McDuck. For someone who loves Swedish Fish – well, no comment. I’d call this “a potential cognitohazard” if you were to yell it into a crowd with unknown opinions on Swedish Fish.)

Anyways, hope that clears things up.


* For a published track record of this usage, see: an academic paper from Future of Humanity Institute and Center for Health Security staff, another piece by Bostrom, an opinion piece by esteemed synthetic biologist Kevin Esvelt, a piece on synthetic biology by FHI researcher Cassidy Nelson, a piece by Phil Torres.

(UPDATE: The version I initially published proposed the term “memetic hazard” rather than “cognitohazard.” LessWrong commenter MichaelA kindly pointed out that “memetic hazard” already refers to a different concept that better suits that name. Since I had only just put out the post, I decided to quickly backpedal and switch out the word for another one with similar provenance. I hate having to do this, but it sure beats not doing it. Thank you, MichaelA!)

Algorithmic horror

There’s a particular emotion that I felt a lot over 2019, much more than any other year. I expect it to continue in future years. That emotion is what I’m calling “algorithmic horror”.

It’s confusion at a targeted ad on Twitter for a product you were just talking about.

It’s seeing a “recommended friend” on facebook who you haven’t seen in years and don’t have any contact with.

It’s skimming a tumblr post with a banal take and not really registering it, and then realizing it was written by a bot.

It’s those baffling Lovecraftian kid’s videos on Youtube.

It’s a disturbing image from ArtBreeder, dreamed up by a computer.

Pictured: a normal dog. Don’t worry about it. It’s fine.

I see this as an outgrowth of ancient, evolution-calibrated emotions. Back in the day, our lives depended on quick recognition of the signs of other animals – predator, prey, or other humans. There’s a moment I remember from animal tracking where disparate details of the environment suddenly align – the way the twigs are snapped and the impressions in the dirt suddenly resolve themselves into the idea of deer.

In the built environment of today, we know that most objects are built by human hands. Still, it can be surprising to walk in an apparently remote natural environment and find a trail or structure, evidence that someone has come this way before you. Skeptic author Michael Shermer calls this “agenticity”, the human bias towards seeing intention and agency in all sorts of patterns.

Or, as argumate puts it:

the trouble is humans are literally structured to find “a wizard did it” a more plausible explanation than things just happening by accident for no reason.

I see algorithmic horror as an extension of this: machine-generated objects masquerading as human-made. I looked up oarfish merchandise on Amazon, to see if I could buy anything commemorating the world’s best fish, and found this hat.

If you look at the seller’s listing, you can confirm that all of their products are like this.

It’s a bit incredible. Presumably, no #oarfish hat has ever existed. No human ever created an #oarfish hat or decided that somebody would like to buy them. Possibly, nobody had ever even viewed the #oarfish hat listing until I stumbled onto it.

In a sense this is just an outgrowth of custom-printing services that have been around for decades, but… it’s weird, right? It’s a weird ecosystem.

But human involvement can be even worse. All of those weird Youtube kid’s videos were made by real people. Many of them are acted out by real people. But they were certainly made to market to children on Youtube, and named and designed to fit into a thoughtless algorithm. You can’t tell me that an adult human was ever like “you know what a good artistic work would be?” and then made “Learn Colors Game with Disney Frozen, PJ Masks Paw Patrol Mystery – Spin the Wheel Get Slimed” without financial incentives created by an automated program.

If you want a picture of the future, imagine a faceless adult hand pulling a pony figurine out of a plastic egg, while taking a break between cutting glittered balls of playdoh in half, silent while a prerecorded version of Skip To My Lou plays in the background, forever.

Everything discussed so far is relatively inconsequential, foreshadowing rather than the shade itself. But algorithms are still affecting the world and harming people now – setting racially-biased bail in Kentucky, making potentially-biased hiring decisions, facilitating companies recording what goes on in your home, even forcing career Youtubers to scramble and pivot as their videos become more or less recommended.

To be clear, algorithms also do a great deal of good – increasing convenience and efficiency, decreasing resource consumption, probably saving lives as well. I don’t mean to write this to say “algorithms are all-around bad”, or even “algorithms are net bad”. Sometimes an algorithm is deployed solely with good intentions and still sounds incredibly creepy, like how Facebook judges how suicidal all of its users are.

This is an elegant instance of Goodhart’s Law. Goodhart’s Law says that if you want a certain result and issue rewards for a metric related to the result, you’ll start getting optimization for the metric rather than the result.

The Youtube algorithm – and other algorithms across the web – are created to connect people with content (in order to sell to advertisers, etc.) Producers of content want to attract as much attention as possible to sell their products.

But the algorithms just aren’t good enough to perfectly offer people the online content they want. They’re simplified, rely on keywords, can be duped, etcetera. And everyone knows that potential customers aren’t going to trawl through the hundreds of pages of online content themselves for the best “novelty mug” or “kid’s video”. So a lot of content exists, and decisions are made, that fulfill the algorithm’s criteria rather than our own.
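
To make that concrete, here’s a tiny toy simulation in Python. Everything in it – the numbers, the variable names, the assumption that gaming moves the metric a bit more cheaply than craft does – is invented for illustration; it doesn’t model any real recommender. It just shows how rewarding a proxy lets metric-shaped content beat the content people actually wanted:

```python
import random

random.seed(0)

# Each piece of content has some genuine quality (what viewers actually want)
# and some amount of metric-gaming (keyword stuffing, clickbait thumbnails).
# The recommender can only observe engagement, and in this toy world gaming
# moves engagement a little more cheaply than genuine quality does.
def engagement(quality: float, gaming: float) -> float:
    return quality + 1.5 * gaming + random.gauss(0, 0.5)

# A producer with a fixed effort budget decides how to split it.
def make_content(effort_on_gaming: float, budget: float = 10.0):
    quality = budget - effort_on_gaming
    return quality, engagement(quality, effort_on_gaming)

honest_quality, honest_score = make_content(effort_on_gaming=0.0)
gamed_quality, gamed_score = make_content(effort_on_gaming=10.0)

print(f"honest producer: quality={honest_quality:.0f}, engagement={honest_score:.1f}")
print(f"metric gamer:    quality={gamed_quality:.0f}, engagement={gamed_score:.1f}")
# The metric prefers the gamer even though it delivered zero of what viewers
# wanted -- once rewards attach to the metric, it stops tracking the goal.
```

The recommender in this sketch never sees quality at all, only engagement, so the producer who pours everything into gaming wins – which is roughly the dynamic behind the #oarfish hat and “Spin the Wheel Get Slimed.”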

In a sense, when we look at the semi-coherent output of algorithms, we’re looking into the uncanny valley between the algorithm’s values and our own.

We live in strange times. Good luck to us all for 2020.


Aside from its numerous forays into real life, algorithmic horror has also been at the center of some stellar fiction. See:

UrracaWatch: A biodefense twitter mystery

This is an internet mystery that is now mostly defunct. I’m going to write it down here anyways in case someone can tell me what was going on, or will be able to in the future.

UPDATE, 2020-05-17: UrracaWatch briefly went back up in December 2020. It is down again, but this time I was able to capture a version on The Internet Archive. Here’s a link to that archived version.

In July 2019, a few people on Professional Biodefense Twitter noted that they were getting follows or likes from some very idiosyncratic twitter accounts. (Some more screenshots are available at that link.)

The posts on these accounts had a few things in common:

  • Links to apparently random web pages related to chemical weapons, biological weapons, or health care
  • These links are routed through “UrracaWatch.com” before leading to the final link
  • No commentary

The accounts also have a few other properties:

  • Real-sounding usernames and display names
  • No other posts on the account
  • I tried reverse-image-searching a couple account-related images and didn’t see anything. James Diggans on Twitter tried doing the same for account profile photos (of people) and also didn’t find results.

The choice of linked websites was very strange. They looked like someone had searched for various chem-weapon, bioweapon, or health-related words, then chosen random websites from the first page or two of search results: definition pages, scholarly articles, products (but all from very different websites).

Tweets from one of the UrracaWatch Twitter accounts.

Some example UrracaWatch bot account handles: DeterNoBoom, fumeFume31, ChemOrRiley, ChristoBerk, BioWeaP0n, ScienceGina, chempower2112, ChemistWannabe. All of these looked exactly like the Mark Davis @ChemPower2112 account shown above. (Sidenote: I really wish I had archived these more properly. If you find an internet mystery you might want to investigate later, save all the pages right away. You’re just going to have to take me on faith. Alternatively, if you have more screenshots of any of these websites or accounts, please send them to me.)

If this actually is weird psy-op propaganda, I think “Holly England @VaxyourKid” represents a rare example of pro-vaccination English-language propaganda, as opposed to the more common anti-vaccination kind. Also, not to side with the weird maybe-psy-op, but vaccinate your kids.

And here are some facts about the (now-defunct) website UrracaWatch:

  • The website had a very simple format – a list of links (the same kinds of bio/chem/health links that end up on the twitter pages), and a text bar at the top for entering new links.
  • (I tried using it to submit a link and didn’t see an immediate new entry on the page.)
  • There were no advertisements, information on the creators, other pages, etc.
  • According to the page source code and the tracker- and cross-request-detecting Firefox app Ghostery, there were no trackers, counters, advertisers, or any other complexity on the site.
  • According to the ICANN registry, the domain UrracaWatch.com was registered 9-17-2018 via GoDaddy. The domain has now expired as of 9-17-2019, probably as part of a 12-month domain purchase.
  • Urraca is a Spanish word for magpie, which was a messenger of death in the view of the Anasazi people. (The messenger-of-death part probably isn’t relevant here, but they mention the word as part of a real-life spooky historical site in The Black Tapes Podcast, and this added an unavoidable sinister flavor.) (Urraca is also a woman’s name.)

(I don’t have a screenshot of the website. A March 2019 Internet Archive snapshot is blank, but I’m not sure if that’s an error or was an accurate reflection at the time.)

As far as I can tell, nobody aside from these twitterbots has ever linked to or used UrracaWatch.com for anything at all, anywhere on the web.

By and large, the twitterbots – and I think they must be bots – have been banned. The website is down.

But come on, what on earth was UrracaWatch?

Some possibilities:

  • Advertisement scheme
  • Test of some kind of Twitter-scraping link- or ad-bot that happened to focus on the biodefense community on twitter for some reason
  • Weird psy-op

I’m dubious of the advertisement angle. I’ve been enjoying a lot of the podcast Reply All lately, especially their episodes on weird scams. My favorite episode (The Case of the Phantom Caller) makes an interesting point when dissecting a weird communication, and I asked myself the same question here: I just can’t see how anyone is making money off of this. Again, there were occasional product links, but they went to all different websites that looked like legitimate stores, and I don’t think I ever saw multiple links to the same store.

That leaves “bot test” and “weird psy-op”, or something I haven’t thought of yet. If it was propaganda, it wasn’t very good. If you have a guess about what was going on, let me know.

Naked mole-rats: A case study in biological weirdness

Epistemic status: Speculative, just having fun. This piece isn’t well-cited, but I can pull up sources as needed – nothing about mole-rats is my original research. A lot of this piece is based on Wikipedia.

When I wrote about “weirdness” in the past, I called marine invertebrates, archaea viruses, and Florida Man stories “predictably weird”. By that I meant I wasn’t really surprised to learn any new wild fact about them. But there’s a sense in which marine invertebrates both are and aren’t weird. I want to try operationalizing “weirdness” as “amount of unpredictability or diversity present in a class” (or “in an individual”) compared to other members of its group.

So in terms of “animals you hear about” – well, you know the tigers, the mice, the bees, the tuna fish, the songbirds, whatever else comes up in your life. But “deep sea invertebrates” seems to include a variety of improbable creatures – a betentacled neon sphere covered in spikes, a six-foot-long, disconcertingly smooth, flesh-colored worm, bisexual squids, etc. Hey! Weird! That’s weird.

But looking at a phylogenetic tree, we see really quickly that “invertebrates” represent almost the entire animal tree of life.

 

Invertebrates represent most of the strategies that animals have attempted on earth, and certainly most of the animals on earth. Vertebrates are the odd ones out.

But you know which animals are profoundly weird, no matter which way you look at it? Naked mole rats. Naked mole-rats have like a dozen properties that are not just unusual, not just strange, but absolutely batshit. Let’s review.

1. They don’t age

What? Well, for most animals, their chance of dying goes up over time. You can look at a population and find something like this:

[Graph: for most animals, the chance of dying rises with age]

Mole-rats, though, have the same chance of dying at any age. Their graph looks like this:

[Graph: for mole-rats, the chance of dying stays flat with age]

They’re joined, more or less, by a few species of jellyfish, flatworms, turtles, lobsters, and at least one fish.

They’re hugely long-lived compared to other rodents, seen in zoos at 30+ years old compared to the couple brief years that rats get.

2. They don’t get cancer

Cancer generally seems to be the curse of multicellular beings, but naked mole-rats are an exception. A couple mole-rats have developed cancer-like growths in captivity, but no wild mole-rat has ever been found with cancer.

3. They don’t feel some forms of pain

Mole-rats don’t respond to acid or capsaicin, which is, as far as I know, unique among mammals.

4. They’re eusocial

Definitely unique among mammals. Like bees, ants, and termites, naked mole-rats have a single breeding “queen” in each colony, and other “worker” individuals exist in castes that perform specific tasks. In an evolutionary sense, this means that the “unit of selection” for the species is the queen, not any individual – the queen’s genes are the ones that get passed down.

They’re also a fascinating case study of an animal whose existence was deduced before it was proven. Nobody knew about eusocial mammals for a long time. In 1974, entomologist Richard Alexander, who studied eusocial insects, wrote down a set of environmental characteristics he thought would be required for a eusocial mammal to evolve. In 1981 and over the following decade, naked mole-rats – a perfect match for his predictions – were found to be eusocial.

5. They don’t have fur

Obviously. But aside from genetic flukes or domesticated breeds, that puts them in a small, unlikely group with only some marine mammals, rhinoceroses, hippos, elephants, one species of boar, and… us.

[Gif: a naked mole-rat crawling through a clear tube]

You and this entity have so much in common.

6. They’re able to survive ridiculously low oxygen levels

They use very little oxygen during normal metabolism, much less than comparably-sized rodents, and they can survive for hours at 5% oxygen (about a quarter of normal levels).

7. Their front teeth move back and forth like chopsticks

I’m not actually sure how common this is in rodents. But it really weirded me out.

8. They have no regular sleep schedule

This is weird, because jellyfish have sleep schedules. But not mole-rats!

9. They’re cold-blooded

They have basically no ability to adjust their body temperature internally, perhaps because their caves tend to be rather constant temperatures. If they need to be a different temperature, they can huddle together, or move to a higher or lower level in their burrow.


All of this makes me think that mole-rats must have some underlying unusual properties which lead to all this – a “weirdness generator”, if you will.

A lot of these are connected to the fact that mole rats spend almost their entire lives underground. There are lots of burrowing animals, but “almost their entire” is pretty unusual – they don’t surface to find food, water, or (usually) mates. (I think they might only surface when digging tunnels and when a colony splits.) So this might explain (8) – no need for a sleep schedule when you can’t see the sun. It also seems to explain (5) and (9), because thermoregulation is unnecessary when they’re living in an environment that’s a pretty constant temperature.

It probably explains (6) because lower burrow levels might have very little oxygen most of the time, although there’s some debate about this – their burrows might actually be pretty well ventilated.

And Richard Alexander’s 12 postulates that would lead to a eusocial vertebrate – plus some other knowledge of eusociality – suggest that this underground climate, combined with the available lifestyle and food source of a mole-rat, should lead to eusociality.

It might also be the source of (2) and (3) – people have theorized that higher CO2 or lower oxygen levels in burrows might reduce DNA damage or be related to neuron function or something. (This would also explain why only mole-rats in captivity have had tumors, since they’re kept at atmospheric oxygen levels.) These ideas still seem to be up in the air, though. Mole-rats clearly have a variety of fascinating biochemical tricks that are still being understood.

So there’s at least one “weirdness generator” that leads to all of these strange mole-rat properties. There might be more.

I’m pretty sure it’s not the chopstick teeth (7), at least – but as with many predictions one could make about mole rats, I could easily be wrong.

[Gif: a naked mole-rat]

To watch some naked mole-rats going about their lives, check out the Pacific Science Center’s mole-rat live camera. It’s really fun, if a writhing mass of playful otters that are also uncooked hotdogs sounds fun to you.

[Screenshot: the Pacific Science Center’s naked mole-rat live camera]

Spaghetti Towers

Here’s a pattern I’d like to be able to talk about. It might be known under a certain name somewhere, but if it is, I don’t know it. I call it a Spaghetti Tower. It shows up in large complex systems that are built haphazardly.

Someone or something designs and builds the first Part A.

[Drawing: Part A on its own]

Later, someone wants to put a second Part B on top of Part A, either out of convenience (a common function, just somewhere to put it) or as a refinement to Part A.

[Drawing: Part B built on top of Part A]

Now, suppose you want to tweak Part A. If you do that, you might break Part B, since it interacts with bits of Part A. So you might instead build Part C on top of the previous ones.

[Drawing: Part C built on top of Parts A and B]

And by the time your system looks like this, it’s much harder to tell what changes you can make to an earlier part without crashing some component, so you’re basically relegated to throwing another part on top of the pile.

[Drawing: a teetering tower of many interdependent parts]

I call these spaghetti towers for two reasons: One, because they tend to quickly take on circuitous knotty tangled structures, like what programmers call “spaghetti code”. (Part of the problem with spaghetti code is that it can lead to spaghetti towers.)

Especially since they’re usually interwoven in multiple dimensions, and thus look more like this:

[Drawing: parts interwoven in multiple dimensions]

“Can you just straighten out the yellow one without touching any of the others? Thanks.”

Second, because shortsightedness in the design process is a crucial part of spaghetti machines. In order to design a spaghetti system, you throw spaghetti against a wall and see if it sticks. Then, when you want to add another part, you throw more spaghetti until it sticks to that spaghetti. And later, you throw more spaghetti. So it goes. And if you decide that you want to tweak the bottom layer to make it a little more useful – which you might want to do because, say, it was built out of spaghetti – without damaging the next layers of gummy partially-dried spaghetti, well then, good luck.

Note that all systems have load-bearing, structural pieces. That alone does not make them spaghetti towers. The distinctive thing about spaghetti towers is that they have a lot of shoddily-built components that became load-bearing completely unintentionally. A bridge has major load-bearing components – they’re pretty obvious, strong, elegant, and they efficiently support the rest of the structure. A spaghetti tower is more like this.

[Image: an improvised plumbing contraption]

The motto of the spaghetti tower is “Sure, it works fine, as long as you never run lukewarm water through it and turn off the washing machine during thunderstorms.” || Image from the always-delightful r/DiWHY.

Where do spaghetti towers appear?

  • Basically all of biology works like this. Absolutely all of evolution is made by throwing spaghetti against walls and seeing what sticks. (More accurately, throwing nucleic acid against harsh reality and seeing what successfully makes more nucleic acid.) We are 3.5 billion years of hacks in fragile trench coats.
    • Slate Star Codex describes the phenomenon in neurotransmitters, but it’s true for all of molecular biology:

You know those stories about clueless old people who get to their Gmail account by typing “Google” into Bing, clicking on Google in the Bing search results, typing “Gmail” into Google, and then clicking on Gmail in the Google search results?

I am reading about serotonin transmission now, and everything in the human brain works on this principle. If your brain needs to downregulate a neurotransmitter, it’ll start by upregulating a completely different neurotransmitter, which upregulates the first neurotransmitter, which hits autoreceptors that downregulate the first neurotransmitter, which then cancel the upregulation, and eventually the neurotransmitter gets downregulated.

Meanwhile, my patients are all like “How come this drug that was supposed to cure my depression is giving me vision problems?” and at least on some level the answer is “how come when Bing is down your grandfather can’t access Gmail?”

  • My programming friends tell me that spaghetti towers are near-universal in the codebases of large companies. It would theoretically be nice if every function were neatly ordered, but in practice the thing you’re working on has three different dependencies, two of which are unmaintained and were abandoned when the guy who built them went to work at Google, and you can never be 100% certain that your code tweak won’t crash the site. (There’s a small code sketch of this after the list.)
  • I think this also explains some of why bureaucracies look and act the way they do, and are so hard to change.
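Here is a minimal, made-up Python sketch of the programming version (the functions and their quirks are invented, not taken from any real codebase). Each layer quietly depends on accidental details of the layer below it, so the only safe change left is to add another layer on top:

```python
# A toy spaghetti tower. Part A was written first; everything above it
# quietly depends on its quirks, so "just fix Part A" breaks the upper layers.

def part_a(raw):
    # Part A: parse a record. Quirk: returns a bare list, with the date
    # left as a "DD-MM-YYYY" string.
    name, date = raw.split(",")
    return [name.strip(), date.strip()]

def part_b(raw):
    # Part B: built later for a report. Rather than touch Part A, it reaches
    # into A's list by index and re-parses A's date string.
    fields = part_a(raw)
    day, month, year = fields[1].split("-")
    return f"{fields[0]} joined in {year}"

def part_c(raw):
    # Part C: built later still. It doesn't dare touch A or B, so it wraps B
    # and patches up B's wording with string surgery.
    return part_b(raw).replace("joined in", "has been a member since")

print(part_c("Ada Lovelace, 10-12-1815"))
# -> Ada Lovelace has been a member since 1815
# Now try "improving" part_a to return a dict or a real date object:
# part_b's indexing and string-splitting break, and part_c breaks with them,
# so in practice you add a part_d on top instead, and the tower grows.
```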

I think there are probably a lot of examples of spaghetti towers, and they probably have big ramifications for things like what systems evolution can and can’t build.

I want to do a much deeper and more thoughtful analysis about what exactly the implications here are, but this has been kicking around my brain for long enough and all I want to do is get the concept out there.

Does this feel like a meaningful concept? Where do you see spaghetti towers?

Crossposted to LessWrong.


Happy solstice from Eukaryote Writes Blog. Here’s a playlist for you (or listen to Raymond Arnold’s Secular Solstice music.)

The funnel of human experience

[EDIT: A previous version of this post had some errors. Thanks to jeff8765 for pinpointing the error and to esrogs in the comments for bringing it to my attention as well. This has been fixed. Also, I wrote FHI when I meant FLI.]

The graph of the human population over time is also a map of human experience. Think of the height of the curve at each year as the amount of human lived experience that happened that year. On the left, we see the approximate dawn of the modern human species around 50,000 BC. On the right, the population exploding in the present day.

[Graph: world population from 50,000 BC to the present]

It turns out that if you add up all these years, 50% of human experience has happened after 1309 AD. 15% of all experience has been experienced by people who are alive right now.

I call this “the funnel of human experience” – the fact that because of a tiny initial population blossoming out into a huge modern population, more of human experience has happened recently than time would suggest.

50,000 years is a long time, but 8,000,000,000 people is a lot of people.
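Here is a rough Python sketch of that arithmetic. The population checkpoints below are crude placeholders rather than the dataset linked at the bottom of this post, so the output will not land exactly on 1309 AD; the point is the method: approximate person-years per interval, take a cumulative sum, and find where it crosses 50%.

```python
# Rough "funnel of human experience" arithmetic with placeholder checkpoints.
# Person-years in an interval ~ average population x interval length (in years).

checkpoints = [  # (year, world population) - illustrative placeholders only
    (-50000, 2e6), (-10000, 4e6), (0, 190e6), (1000, 265e6), (1500, 460e6),
    (1800, 1.0e9), (1900, 1.65e9), (2000, 6.1e9), (2020, 7.8e9),
]

intervals = []
for (y0, p0), (y1, p1) in zip(checkpoints, checkpoints[1:]):
    intervals.append((y0, y1, (p0 + p1) / 2 * (y1 - y0)))

total = sum(py for _, _, py in intervals)
print(f"Total person-years lived: {total:.2e}")

cumulative = 0.0
for y0, y1, py in intervals:
    if cumulative + py >= total / 2:
        # Interpolate inside the interval where the running total crosses 50%.
        frac = (total / 2 - cumulative) / py
        print(f"Half of all human experience happened after ~{y0 + frac * (y1 - y0):.0f}")
        break
    cumulative += py
```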


Early human experience: casts of the skulls of the earliest modern humans found on various continents. Display at the Smithsonian National Museum of Natural History.

 


If you want to expand on this, you can start doing some Fermi estimates. (There’s a small code sketch of the unit conversions after the list.) We as a species have spent…

  • 1,650,000,000,000 total “human experience years”
    • See my dataset linked at the bottom of this post.
  • 7,450,000,000 human years spent having sex
    • Humans spend 0.45% of our lives having sex. 0.45% * [total human experience years] = 7E9 years
  • 52,000,000,000 years spent drinking coffee
    • 500 billion cups of coffee drunk this year x 15 minutes to drink each cup x 100 years* = 5E10 years
      • *Coffee consumption has likely been much higher recently than historically, but it does have a long history. I’m estimating about a hundred years of current consumption for total global consumption ever.
  • 1,000,000,000 years spent in labor
    • 110,000,000,000 humans ever x ½ women x 12 pregnancies* x 15 hours apiece = 1.1E9 years
      • *Infant mortality, yo. H/t Ellie and Shaw for this estimate.
  • 417,000,000 years spent worshipping the Greek gods
    • 1000 years* x 10,000,000 people** x 365 days a year x 1 hour a day*** = 4E8 years

      • *Some googling suggested that people worshipped the Greek/Roman Gods in some capacity from roughly 500 BC to 500 AD.
      • **There were about 10 million people in Ancient Greece. This probably tapered a lot to the beginning and end of that period, but on the other hand worship must have been more widespread than just Greece, and there have been pagans and Hellenists worshiping since then.
      • ***Worshiping generally took about an hour a day on average, figuring in priests and festivals? Sure.
  • 30,000,000 years spent watching Netflix
    • 14,000,000 hours/day* x 365 days x 5 years** = 2.92E7 years
      • * Netflix users watched an average of 14 million hours of content a day in 2017.
      • **Netflix the company has been around for 10 years, but has gotten bigger recently.
  • 50,000 years spent drinking coffee in Waffle House
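As promised, here is a quick Python sketch of the unit conversions for two of the line items above, the ones whose inputs are spelled out in the footnotes. Everything is order-of-magnitude only:

```python
# Order-of-magnitude unit conversions for two of the estimates above.

HOURS_PER_YEAR = 24 * 365

def person_hours_to_years(person_hours):
    return person_hours / HOURS_PER_YEAR

# Worshipping the Greek gods:
# ~1000 years x ~10 million worshippers x 365 days/year x ~1 hour/day
greek_hours = 1000 * 10_000_000 * 365 * 1
print(f"Greek gods: {person_hours_to_years(greek_hours):.1e} years")  # ~4.2e8

# Sex: 0.45% of the ~1.65e12 total human experience years
print(f"Sex: {0.0045 * 1.65e12:.1e} years")  # ~7.4e9
```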

So humanity in aggregate has spent about ten times as long worshiping the Greek gods as we’ve spent watching Netflix.

We’ve spent another ten times as long having sex as we’ve spent worshipping the Greek gods.

And we’ve spent ten times as long drinking coffee as we’ve spent having sex.


I’m not sure what this implies. Here are a few things I gathered from this:

1) I used to be annoyed at my high school world history classes for spending so much time on medieval history and after, when there was, you know, all of history before that too. Obviously there are other reasons for this – Eurocentrism, the fact that more recent events have clearer ramifications today – but to some degree this is in fact accurately reflecting how much history there is.

On the other hand, I spent a bunch of time in school learning about the Greek Gods, a tiny chunk of time learning about labor, and virtually no time learning about coffee. This is another disappointing trend in the way history is approached and taught, focusing on a series of major events rather than the day-to-day life of people.

2) The Funnel gets more stark the closer you move to the present day. Look at science. FLI reports that 90% of PhDs that have ever lived are alive right now. That means most of all scientific thought is happening in parallel rather than sequentially.

3) You can’t use the Funnel to reason about everything. For instance, you can’t use it to reason about extended evolutionary processes. Evolution is necessarily cumulative. It works on the unit of generations, not individuals. (You can make some inferences about evolution – for instance, the likelihood of any particular mutation occurring increases when there are more individuals to mutate – but evolution still has the same number of generations to work with, no matter how large each generation is.)

4) This made me think about the phrase “living memory”. The world’s oldest living person is Kane Tanaka, who was born in 1903. 28% of the entirety of human experience has happened since her birth. As mentioned above, 15% has been directly experienced by living people. We have writing and communication and memory, so we have a flawed channel by which to inherit information, and experiences in a sense. But humans as a species can only directly remember as far back as 1903.


Here’s my dataset. The population data comes from the Population Reference Bureau and their report on how many humans ever lived, and from Our World In Data. Let me know if you get anything from this.

Fun fact: The average living human is 30.4 years old.

Wait But Why’s explanation of the real revolution of artificial intelligence is relevant and worth reading. See also Luke Muehlhauser’s conclusions on the Industrial Revolution: Part One and Part Two.


Crossposted to LessWrong.

Caring less

Why don’t more attempts at persuasion take the form “care less about ABC”, rather than the popular “care more about XYZ”?

People, in general, can only do so much caring. We can only spend so many resources and so much effort and brainpower on the things we value.

For instance: Avery spends 40 hours a week working at a homeless shelter, and a substantial amount of their free time researching issues and lobbying for better policy for the homeless. Avery learns about existential risk and decides that it’s much more important than homelessness, say 100 times more, and is able to pivot their career into working on existential risk instead.

But nobody expects Avery to work 100 times harder on existential risk, or feel 100 times more strongly about it. That’s ridiculous. There literally isn’t enough time in the day, and thinking like that is a good way to burn out like a meteor in orbit.

Avery also doesn’t stop caring about homelessness – not at all. But as a result of caring so much more about existential risk, they do have to care less about homelessness (in any meaningful or practical sense) as a result.

And this is totally normal. It would be kind of nice if we could put a meaningful amount of energy in proportion to everything we care about, but we only have so much emotional and physical energy and time, and caring about different things over time is a natural part of learning and life.

When we talk about what we should care about, where we should focus more of our time and energy, we really only have one kludgey tool to do so: “care more”. Society, people, and companies are constantly telling you to “care more” about certain things. Your brain will take some of these, and through a complicated process, reallocate your priorities such that each gets an amount of attention that fits into your actual stores of time and emotional and physical energy.

But since what we value and how much is often considered, literally, the most important thing on this dismal earth, I want more nuance and more accuracy in this process. Introducing “consider caring less” into the conversation does this. It names an important mental action and lets you say what you want more accurately. Caring less already happens in people’s beliefs, and it affects the world, so let’s talk about it.

On top of that, the constant chorus of “care more” is also exhausting. It creates a societal backdrop of guilt and anxiety. And some of this is good – the world is filled with problems and it’s important to care about fixing them. But you can’t actually do everything, and establishing the mental affordance to care less about something without disregarding it entirely or feeling like an awful human is better for the ability to prioritize things in accordance with your values.

I’ve been talking loosely about cause areas, but this applies everywhere. A friend describes how in work meetings, the only conversational attitude ever used is this is so important, we need to work hard on that, this part is crucial, let’s put more effort here. Are these employees going to work three times harder because you gave them more things to focus on, and didn’t tell them to focus on anything else less? No.

I suspect that more “care less” messaging would do wonders on creating a life or a society with more yin, more slack, and a more relaxed and sensible attitude towards priorities and values.

It also implies a style of thinking we’re less used to than “finding reasons people should care”, but it’s one that can be done and it reflects actual mental processes that already exist.


Why don’t we see this more?

(Or “why couldn’t we care less”?)

Some suggestions:

  • It’s more incongruous with brains

Brains can create connections easily, but unlike computers, can’t erase them. You can learn a fact by practicing it on notecards or by phone reminders, but can’t un-learn a fact except by disuse. “Care less” is requesting an action from you that’s harder to implement than “care more”.

  • It’s not obvious how to care less about something

This might be a cultural thing, though. Ways to care less about something include: mindfulness, devoting fewer resources towards a thing, allowing yourself to put more time into your other interests, and reconsidering when you’re taking an action based on the thing and deciding if you want to do something else.

  • It sounds preachy

I suspect people feel that if you assert “care more about this”, you’re just sharing your point of view, and information that might be useful, and working in good faith. But if you say “care less about that”, it feels like you’re claiming to know their values and their point of view, and declaring that you understand their priorities better than they do and that their priorities are wrong.

Actually, I think either “care more” or “care less” can have both of those nuances. At its best, “maybe care less” is a helpful and friendly suggestion made in your best interests. There are plenty of times I could use advice along the lines of “care less”.

At its worst, “care more” means “I know your values better than you, I know you’re not taking them seriously, and I’m so sure I’m right that I feel entitled to take up your valuable time explaining why.”

  • It invokes defensiveness

If you treat the things you care about as cherished parts of your identity, you may react badly to people telling you to care less about them. If so, “care less about something you already care about” has a negative emotional effect compared to “care more about something you don’t already care about”.

(On the other hand, being told you don’t have to worry about something can be a relief. It might depend on if you see the thought in question as a treasured gift or as a burden. I’m not sure.)

  • It’s less memetically fit

“Care more about X” sounds more exciting and engaging than “care less about Y”, so people are more likely to remember and spread it.

  • It’s dangerous

Maybe? Maybe by telling people to “care less” you’ll remove their motivations and drive them into an unrelenting apathy. But if you stop caring about something major, you can care more about other things.

Also, if this happens and harms people, it already happens when you tell people to “care more” and thus radically change their feelings and values. Unfortunately, a process exists by which other people can insert potentially-hostile memes into your brain without permission, and it’s called communication. “Care less” doesn’t seem obviously more risky than the reverse.

  • We already do (sometimes)

Buddhism has a lot to say on relinquishing attachment and desires.

Self-help-type things often say “don’t worry about what other people think of you” or “peer pressure isn’t worth your attention”, although they rarely come with strategies.

Criticism implicitly says “care less about X”, though this is rarely explicitly turned into suggestions for the reader.

Effective Altruism is an example of this when it criticizes ineffective cause areas or charities. This image implicitly says “…So maybe care more about animals on farms and less about pets,” which seems like a correct message for them to be sending.

Image from Animal Charity Evaluators.


Anyway, maybe “care less” messaging doesn’t work well for some reason, but existing messaging is homogeneous in this way and I’d love to see people at least try for some variation.


Photo taken at the 2016 Bay Area Secular Solstice. During an intermission, sticky notes and markers were passed around, and we were given the prompt: “If someone you knew and loved was suffering in a really bad situation, and was on the verge of giving up, what would you tell them?” Most of them were beautiful messages of encouragement and hope and support, but this was my favorite.


Crossposted on LessWrong.

This blog has a Patreon. If you like what you’ve read, consider giving it your support so I can make more of it.

Fictional body language

Here’s something weird.

A common piece of advice for fiction writers is to “show, not tell” a character’s emotions. It’s not bad advice. It means that when you want to convey an emotional impression, describe the physical characteristics instead.

The usual result of applying this advice is that instead of a page of “Alice said nervously” or “Bob was confused”, you get a vivid page of action: “Alice stuttered, rubbing at her temples with a shaking hand,” or “Bob blinked and arched his eyebrows.”

The second thing is certainly better than the first thing. But a strange thing happens when the emotional valence isn’t easily replaced with an easily-described bit of body language. Characters in books whose authors follow this advice seem to do a lot more yawning, trembling, sighing, emotional swallowing, groaning, and nodding than I or anyone I talk to does in real life.

It gets even stranger. These characters bat their lashes, or grip things so tightly their knuckles go white, or grit their teeth, or their mouths go dry. I variously either don’t think I do those, or wouldn’t notice someone else doing it.

Blushing is a very good example, for me. Because I read books, I knew enough that I could describe a character blushing in my own writing, and the circumstances in which it would happen, and what it looked like. I don’t think I’d actually noticed anyone blush in real life. A couple months after this first occurred to me, a friend happened to point out that another friend was blushing, and I was like, oh, alright, that is what’s going on, I guess this is a thing after all. But I wouldn’t have known before.

To me, it was like a piece of fictional body language we’ve all implicitly agreed represents “the thing your body does when you’re embarrassed or flattered or lovestruck.” I know there’s a particular feeling there, which I could attach to the foreign physical motion, and let the blushing description conjure it up. It didn’t seem any weirder than a book having elves.

(Brienne has written about how writing fiction, and reading about writing fiction, has helped her get better at interpreting emotions from physical cues. They certainly are often real physical cues – I just think the points where this breaks down are interesting.)

Online

There’s another case where humans are innovatively trying to solve the problem of representing feelings in a written medium, which is casual messaging. It’s a constantly evolving blend of your best descriptive words, verbs, emoticons, emojis, and now stickers and gifs and whatever else your platform supports. Let’s draw your attention to the humble emoticon, a marvel of written language. A handful of typographic characters represent a human face – something millions of years of evolution have fine-tuned our brains to interpret precisely.

(In some cases, these are pretty accurate: :) and ^_^ represent more similar things than :) and ;) do, even though ^_^ doesn’t even have the classic turned-up mouth of representational smileys. Body language: it works!)

:)

:|

:<

Now let’s consider this familiar face:

:P

And think of the context in which it’s normally found. If someone was talking to you in person and told a joke, or made a sarcastic comment, and then stuck their tongue out, you’d be puzzled! Especially if they kept doing it! Despite being a clear representation of a human face, that expression only makes sense in a written medium.

I understand why something like :P needs to exist: If someone makes a joke at you in meatspace, how do you tell it’s a joke? Tone of voice, small facial expressions, the way they look at you, perhaps? All of those things are hard to convey in character form. A stuck-out tongue isn’t, and we know what it means.

The ;) and :D emojis translate to meatspace a little better, maybe. Still, when’s the last time someone winked slyly at you in person?

You certainly can communicate complex things by using your words [CITATION NEEDED], but especially in casual conversations, it’s nice to have expressive shortcuts. I wrote a bit ago:

Facebook Messenger’s addition of choosing chat colors and customizing the default emoji has, to me, made a weirdly big difference to what it feels like to use them. I think (at least with online messaging platforms I’ve tried before) it’s unique in letting you customize the environment you interact with another person (or a group of people) in.

In meatspace, you might often talk with someone in the same place – a bedroom, a college dining hall – and that interaction takes on the flavor of that place.

Even if not, in meatspace, you have an experience in common, which is the surrounding environment. It sets that interaction apart from all of the other ones. Taking a walk or going to a coffee shop to talk to someone feels different from sitting down in your shared living room, or from meeting them at your office.

You also have a lot of specific qualia of interacting with a person – a deep comfort, a slight tension, the exact sense of how they respond to eye contact or listen to you – all of which are either lost or replaced with cruder variations in the low-bandwidth context of text channels.

And Messenger doesn’t do much, but it adds a little bit of flavor to your interaction with someone besides the literal string of unicode characters they send you. Like, we’re miles apart and I may not currently be able to hear your voice or appreciate you in person, but instead, we can share the color red and send each other a picture of a camel in three different sizes, which is a step in that direction.

(Other emoticons sometimes take on their own valences: The game master in an online RPG I played in had a habit of typing only “ : ) ” in response when you asked him a juicy question, which quickly filled players with a sense of excitement and foreboding. I’ve tried using it since then in other platforms, before realizing that doesn’t actually convey that to literally anyone else. Similarly, users of certain websites may have a strong reaction to the typographic smiley “uwu”.)

Reasoning from fictional examples

In something that could arguably be called a study, I grabbed three books and chose some arbitrary pages in them to look at how characters’ emotions are represented, particularly around dialogue.

Lirael by Garth Nix:

133: Lirael “shivers” as she reads a book about a monster. She “stops reading, nervously swallows, and reads the last line again”, and “breathes a long sigh of relief.”

428: She “nods dumbly” in response to another character, and stares at an unfamiliar figure.

259: A character smiles when reading a letter from a friend.

624: Two characters “exchange glances of concern”, one “speaks quickly”.

Most of these are pretty reasonable. I think the first one feels overdone to me, but then again, she’s really agitated when she’s reading the book, so maybe that’s reasonable? Nonetheless, flipping through, I think that this is Garth Nix’s main strategy. The characters might speak “honestly” or “nervously” or “with deliberation” as well, but when Nix really wants you to know how someone’s feeling, he’ll show you how they act.

American Gods by Neil Gaiman:

First page I flipped to didn’t have any.

364: A character “smiles”, “makes a moue”, “smiles again”, “tips her head to one side”. Shadow (the main character) “feels himself beginning to blush.”

175: A character “scowls fleetingly.” A different character “sighs” and his tone changes.

The last page also didn’t have any.

Gaiman does more laying out of a character’s thoughts: Shadow imagines how a moment came to happen, or it’s his interpretation that gives flavor – “[Another character] looked very old as he said this, and fragile.”

Earth by David Brin:

First two pages I flipped to didn’t have dialogue.

428: Characters “wave nonchalantly”, “pause”, “shrug”, “shrug” again, “fold his arms, looking quite relaxed”, speak with “an ingratiating smile”, and “continue with a smile”.

207: Characters “nod” and one “plants a hand on another’s shoulder”.

168: “Shivers coursed his back. Logan wondered if a microbe might feel this way, looking with sudden awe into a truly giant soul.” One’s “face grows ashen”, another “blinks.” Amusingly, “the engineer shrugged, an expressive gesture.” Expressive of what?

Brin spends a lot of time living in characters’ heads, describing their thoughts. This gives him time to build his detailed sci-fi world, and also gives you enough of a picture of characters that it’s easy to imagine their reactions later on.

How to use this

I don’t think this is necessarily a problem in need of a solution, but fiction is trying to represent the way real people might act. Even if the premise of your novel starts with “there’s magic”, it probably doesn’t segue into “there’s magic and also humans are 50% more physically expressive, and they are always blushing.” (…Maybe the blushing thing is just me.) There’s something appealing about being able to represent body language accurately.

The quick analysis in the section above suggests at least three ways writers express how a fictional character is feeling to a reader. I don’t mean to imply that any is objectively better than the other, although the third one is my favorite.

1) Just describe how they feel. “Alice was nervous”, “Bob said happily.”

This gives the reader information. How was Alice feeling? Clearly, Alice was nervous. It doesn’t convey nervousness, though. Saying the word “nervous” does not generally make someone nervous – it takes some mental effort to translate that into nervous actions or thoughts.

2) Describe their action. A character’s sighing, their chin stuck out, their unblinking eye contact, their gulping. Sheets like these exist to help.

I suspect these work in two ways:

  1. You can imagine yourself doing the action, and then what mental state might have caused it. Especially if it’s the main character, and you’re spending time in their head anyway. It might also be “Wow, Lirael is shivering in fear, and I have to be really scared before I shiver, so she must be very frightened,” though I imagine that making this inference is asking a lot of a reader.
  2. You can visualize a character doing it, in your mental map of the scene, and imagine what you’d think if you saw someone doing it.

Either way, the author is using visualization to get you to recreate being there yourself. This is where I’m claiming some weird things like fictional body language develop.

3) Use metaphor, or describe a character’s thoughts, in such a way that the reader generates the feeling in their own head.

Gaiman in particular does this quite skillfully in American Gods.

[Listening to another character talk on and on, and then pause:] Shadow hadn’t said anything, and hadn’t planned to say anything, but he felt it was required of him, so said, “Well, weren’t they?”

[While in various degrees of psychological turmoil:] He did not trust his voice not to betray him, so he simply shook his head.

[And:] He wished he could come back with something smart and sharp, but Town was already back at the Humvee, and climbing up into the car; and Shadow still couldn’t think of anything clever to say.

Also metaphors, or images:

Chicago happened slowly, like a migraine.

There must have been thirty, maybe even forty people in that hall, and now they were every one of them looking intently at their playing cards, or their feet, or their fingernails, and pretending as hard as they could not to be listening.

By doing the mental exercises written out in the text, by letting your mind run over them and provoke some images in your brain, the author can get your brain to conjure the feeling by using some unrelated description. How cool is that! It doesn’t actually matter whether, in the narrative, it’s occurred to Shadow that Chicago is happening like a migraine. Your brain is doing the important thing on its own.


(Possible Facebook messenger equivalents: 1) “I’m sad” or “That’s funny!” 2) Emoticons / emotive stickers, *hug* or other actions 3) Gifs, more abstract stickers.)


You might be able to use this to derive some wisdom for writing fiction. I like metaphors, for one.

If you want to do body language more accurately, you can also pay attention to exactly how an emotion feels to you, where it sits in your body or your mind – meditation might be helpful – and try and describe that.

Either might be problematic because people experience emotions differently – the exact way you feel an emotion might be completely inscrutable to someone else. Maybe you don’t usually feel emotions in your body, or you don’t easily name them in your head. Maybe your body language isn’t standard. Emotions tend to derive from similar parts of the nervous system, though, so you probably won’t be totally off.

(It’d also be cool if the reader then learned about a new way to feel emotions from your fiction, but the failure mode I’m thinking of is ‘reader has no idea what you were trying to convey.’)

You could also try people-watching (or watching TV or a movie), and examining how you know someone is feeling a certain way. I bet some of these are subtle – slight shifts in posture and expression – but you might get some inspiration. (Unless you had to learn this by memorizing cues from fiction, in which case this exercise is less likely to be useful.)


Overall, given all the shades of nuance that go into emotional valence, and the different ways people feel or demonstrate emotions, I think it’s hardly surprising that we’ve come up with linguistic shorthands, even in places that are trying to be representational.


[Header image is images from the EmojiOne 5.0 update assembled by the honestly fantastic Emojipedia Blog.]