Category Archives: here’s something weird

There’s no such thing as a tree (phylogenetically)

So you’ve heard about how fish aren’t a monophyletic group? You’ve heard about carcinization, the process by which ocean arthropods convergently evolve into crabs? You say you get it now? Sit down. Sit down. Shut up. Listen. You don’t know nothing yet.

“Trees” are not a coherent phylogenetic category. On the evolutionary tree of plants, trees are regularly interspersed with things that are absolutely, 100% not trees. This means that, for instance, either:

  • The common ancestor of a maple and a mulberry tree was not a tree.
  • The common ancestor of a stinging nettle and a strawberry plant was a tree.
  • And this is true for most trees or non-trees that you can think of.

I thought I had a pretty good guess at this, but the situation is far worse than I could have imagined.

Partial phylogenetic tree of various plants. TL;DR: Tan is definitely, 100% trees. Yellow is tree-like. Green is 100% not a tree. Sourced mostly from Wikipedia.

I learned after making this chart that tree ferns exist (h/t seebs), which I think just emphasizes my point further. Also, h/t kithpendragon on LW for suggestions on increasing accessibility of the graph.

Why do trees keep happening?

First, what is a tree? It’s a big long-lived self-supporting plant with leaves and wood.

Also of interest to us are the non-tree “woody plants”, like lianas (thick woody vines) and shrubs. They’re not trees, but at least to me, it’s relatively apparent how a tree could evolve into a shrub, or vice-versa. The confusing part is a tree evolving into a dandelion. (Or vice-versa.)

Wood, as you may have guessed by now, is also not a clear phyletic category. But it’s a reasonable category – a lignin-dense structure, one that usually grows from the exterior and that forms a pretty readily identifiable material when separated from the tree. (…Okay, not the most explainable, but you know wood? You know when you hold something in your hand, and it’s made of wood, and you can tell that? Yeah, that thing.)

All plants have lignin and cellulose as structural elements – wood is plant matter that is dense with both of these.

Botanists don’t seem to think it could only have gone one way – for instance, the common ancestor of flowering plants is theorized to have been woody. But we also have pretty clear evidence of recent evolution of woodiness – say, a new plant arrives on a relatively barren island, and some of the offspring of that plant become treelike. Among plants native to the Canary Islands, wood independently evolved at least 38 times!

One relevant factor is that all woody plants do, in a sense, begin life as herbaceous plants – by and large, a tree sprout shares a lot of properties with any herbaceous plant. Indeed, botanists call this kind of fleshy, soft growth from the center that elongates a plant “primary growth”, and the later growth towards the outside, which causes a plant to thicken, is “secondary growth.” In a woody plant, secondary growth also means growing wood and bark – but other plants sometimes do secondary growth as well, like potatoes in their roots.

This paper addresses the question. I don’t understand a lot of the finer genetic details, but my impression of its thesis is this: analysis of convergently-evolved woody plants shows that the genes for secondary woody growth are similar to the genes for primary growth in plants that don’t do any secondary growth – even in unrelated plants. And woody growth is an adaptation of secondary growth. To abstract a little more, there is a common and useful structure in herbaceous plants that, when slightly tweaked, “dendronizes” them into woody plants.

Dendronization – Evolving into a tree-like morphology. (In the style of “carcinization“.) From ‘dendro‘, the ancient Greek root for tree.

Can this be tested? Yep – knock out a couple of genes that control flower development, change the light levels to mimic summer, and Arabidopsis rock cress – a distinctly herbaceous plant used as a model organism – grows a woody stem never otherwise seen in the species.

The tree-like woody stem (e) and morphology (f, left) of the gene-altered Arabidopsis, compared to its distinctly non-tree-like normal form (f, right). Images from Melzer, Siegbert, et al. “Flowering-time genes modulate meristem determinacy and growth form in Arabidopsis thaliana.” Nature genetics 40.12 (2008): 1489-1492.

So not only can wood develop relatively easily in an herbaceous plant, it can come from messing with some of the genes that regulate annual behavior – an herby plant’s usual lifecycle of reproducing in warm weather, dying off in cool weather. So that gets us two properties of trees at once: woodiness, and being long-lived. It’s still a far cry from turning a plant into a tree, but also, it’s really not that far.

To look at it another way, as Andrew T. Groover put it:

“Obviously, in the search for which genes make a tree versus a herbaceous plant, it would be folly to look for genes present in poplar and absent in Arabidopsis. More likely, tree forms reflect differences in expression of a similar suite of genes to those found in herbaceous relatives.”

So: There are no unique “tree” genes. It’s just a different expression of genes that plants already use. Analogously, you can make a cake with flour, sugar, eggs, butter, and vanilla. You can also make frosting with sugar, butter, and vanilla – a subset of the ingredients you already have, but in different ratios and uses.

But again, the reverse also happens – a tree needs to do both primary and secondary growth, so it’s relatively easy for a tree lineage to drop the “secondary” growth stage and remain an herb for its whole lifespan, thus “poaizing.” As stated above, it’s hypothesized that the earliest angiosperms were woody, some of which would have lost that woodiness to become the most familiar herbaceous plants today. There are also some plants like cassytha and mistletoe, herbaceous plants from tree-heavy lineages, which are both parasitic plants that grow on a host tree. Knowing absolutely nothing about the evolution of these lineages, I think it’s reasonable to speculate that they each came from a tree-like ancestor but poaized to become parasites. (Evolution is very fond of parasites.)

Poaization: Evolving into an herbaceous morphology. From ‘poai‘, the ancient Greek term Theophrastus used for herbaceous plants (“Theophrastus on Herbals and Herbal Remedies”).

(I apologize to anyone I’ve ever complained to about jargon proliferation in rationalist-diaspora blog posts.)

The trend of staying in an earlier stage of development is also called neotenizing. Axolotls are an example in animals – they resemble the juvenile stages of the closely-related tiger salamander. Did you know that, very rarely or when exposed to hormone-affecting substances, axolotls “grow up” into something that looks a lot like a tiger salamander? Not unlike the gene-altered Arabidopsis.

A normal axolotl (left) vs. a spontaneously-metamorphosed “adult” axolotl (right.)

[Photo of normal axolotl by th1098 – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=30918973. Photo of metamorphosed axolotl from deleted reddit user, via this thread: https://www.reddit.com/r/Eyebleach/comments/etg7i6/this_is_itzi_he_is_a_morphed_axolotl_no_thats_not/ ]

Does this mean anything?

A friend asked why I was so interested in this finding about trees evolving convergently. To me, it’s that a tree is such a familiar, everyday thing. You know birds? Imagine if actually there were amphibian birds and mammal birds and insect birds flying all around, and they all looked pretty much the same – feathers, beaks, little claw feet, the lot. You had to be a real bird expert to be able to tell an insect bird from a mammal bird. Also, most people don’t know that there isn’t just one kind of “bird”. That’s what’s going on with trees.


I was also interested in culinary applications of this knowledge. You know people who get all excited about “don’t you know a tomato is a fruit?” or “a blueberry isn’t really a berry?” I was one once, it’s okay. Listen, forget all of that.

There is a kind of botanical definition of a fruit and a berry, talking about which parts of common plant anatomy and reproduction the structure in question is derived from, but they’re definitely not related to the culinary or common understandings. (An apple, arguably the most central fruit of all to many people, is not truly a botanical fruit either).

Let me be very clear here – this is not like the category claims biologists usually make. When we say a bird is a dinosaur, we mean that a bird and a T. rex share a common ancestor that had recognizably dinosaur-ish properties, and that we can generally point to some of those properties in the bird as well – feathers, bone structure, whatever. You can analogize this to similar statements you may have heard – “a whale is a mammal”, “a spider is not an insect”, “a hyena is a feline”…

But this is not what’s happening with fruit. Most “fruits” or “berries” are not descended from a common “fruit” or “berry” ancestor. Citrus fruits all derive from a common ancestral fruit, and so do apples and pears, and plums and apricots – but an apple and an orange, or a fig and a peach, do not share a fruit ancestor.

Instead of trying to get uppity about this, may I recommend the following:

  • Acknowledge that all of our categories are weird and a little arbitrary
  • Look wistfully at pictures of Welwitschia
  • Send a fruit basket to your local botanist/plant evolutionary biologist for putting up with this, or become one yourself
While natural selection is commonly thought to simply be an ongoing process with no “goals” or “end points”, most scientists believe that life peaked at Welwitschia.

[Photo by Sara&Joachim on Flickr, CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=6342924 ]

Some more interesting findings:

  • A mulberry (left) is not related to a blackberry (right). They just… both did that.
[ Mulberry photo by Cwambier – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=63402150. Blackberry photo by Ragesoss – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=4496657. ]
  • Avocado and cinnamon are from fairly closely-related tree species.
  • It’s possible that the last common ancestor between an apple and a peach was not even a tree.
  • Of special interest to my Pacific Northwest readers, the Seattle neighborhood of Magnolia is misnamed after the local madrona tree, which Europeans confused with the (similar-looking) magnolia. In reality, these two species are only very distantly related. (You can find them both on the chart to see exactly how far apart they are.)
  • None of [cactuses, aloe vera, jade plants, snake plants, and the succulent I grew up knowing as “hens and chicks”] are related to each other.
  • Rubus is the genus that contains raspberries, blackberries, dewberries, salmonberries… that kind of thing. (Remember, a genus is the category just above a species – which is kind of a made-up distinction, but suffice to say, this is a closely-related group of plants.) Some of its members have 14 chromosomes. Some of its members have 98 chromosomes.
  • Seriously, I’m going to hand $20 in cash to the next plant taxonomy expert I meet in person. God knows bacteriologists and zoologists don’t have to deal with this.

And I have one more unanswered question. There doesn’t seem to be a strong trend of plants evolving into grasses, despite the fact that grasses are quite successful and seem kind of like the most anatomically simple plant there could be – root, big leaf, little flower, you’re good to go. But most grass-like plants are in the same group. Why don’t more plants evolve towards the “grass” strategy?


Let’s get personal for a moment. One of my philosophical takeaways from this project is, of course, “convergent evolution is a hell of a drug.” A second is something like “taxonomy is not automatically a great category for regular usage.” Phylogenetics is absolutely fascinating, and I do wish people understood it better, and probably “there’s no such thing as a fish” is a good meme to have around because most people do not realize that they’re genetically closer to a tuna than a tuna is to a shark – and “no such thing as a fish” invites that inquiry.

(You can, at least, say that a tree is a strategy. Wood is a strategy. Fruit is a strategy. A fish is also a strategy.)

At the same time, I have this vision in my mind of a clever person who takes this meandering essay of mine and goes around saying “did you know there’s no such thing as wood?” And they’d be kind of right.

But at the same time, insisting that “wood” is not a useful or comprehensible category would be the most fascinatingly obnoxious rhetorical move. Just the pinnacle of choosing the interestingly abstract over the practical whole. A perfect instance of missing the forest for – uh, the forest for …

… Forget it.


Related:

Timeless Slate Star Codex / Astral Codex Ten piece: The categories were made for man, not man for the categories.

Towards the end of writing this piece, I found that actual botanist Dan Ridley-Ellis made a tweet thread about this topic in 2019. See that for more like this from someone who knows what they’re talking about.

For more outraged plant content, I really enjoy both Botany Shitposts (tumblr) and Crime Pays But Botany Doesn’t (youtube.)

[Crossposted to Lesswrong.]

A point of clarification on infohazard terminology

TL;DR: “Infohazard” means any kind of information that could be harmful in some fashion. Let’s use “cognitohazard” to describe information that could specifically harm the person who knows it.

Some people in my circle like to talk about the idea of information hazards or infohazards, which are dangerous information. This isn’t a fictional concept – Nick Bostrom characterizes a number of different types of infohazards in his 2011 paper that introduces the term (PDF available here). Lots of kinds of information can be dangerous or harmful in some fashion – detailed instructions for making a nuclear bomb. A signal or hint that a person is a member of a marginalized group. An extremist ideology. A spoiler for your favorite TV show. (Listen, an infohazard is a kind of hazard, not a measure of intensity. A papercut is still a kind of injury!)

I’ve been in places where “infohazard” is used in the Bostromian sense casually – to talk about, say, dual-use research of concern in the biological sciences, and to describe the specific dangers that might come from publishing procedures or results.

I’ve also been in more esoteric conversations where people use the word “infohazard” to talk about a specific kind of Bostromian information hazard: information that may harm the person who knows it. This is a stranger concept, but there are still lots of apparent examples – a catchy earworm. “You just lost the game.” More seriously, an easy method of committing suicide for a suicidal person. A prototypical fictional example is the “basilisk” fractal from David Langford’s 1988 short story BLIT, which kills you if you see it.

This is a subset of the original definition because it is harmful information, but it’s expected to harm the person who knows it in particular. For instance, detailed schematics for a nuclear weapon aren’t really expected to bring harm to a potential weaponeer – the danger is that the weaponeer will use them to harm others. But fully internalizing the information that Amazon will deliver you a 5-pound bag of Swedish Fish whenever you want is specifically a danger to you. (…Me.)

This disparate use of terms is confusing. I think Bostrom and his intellectual kith get the broader definition of “infohazard”, since they coined the word and are actually using it professionally.*

I propose we call the second thing – information that harms the knower – a cognitohazard.

Pictured: Instantiation of a cognitohazard. Something something red herrings.

This term is shamelessly borrowed from the SCP Foundation, which uses it in a similar way in fiction. I figure the usage can’t make the concept sound any more weird and sci-fi than it already does.

(Cognitohazards don’t have to be hazardous to everybody. Someone who hates Swedish Fish is not going to spend all their money buying bags of Swedish Fish off of Amazon and diving into them like Scrooge McDuck. For someone who loves Swedish Fish – well, no comment. I’d call this “a potential cognitohazard” if you were to yell it into a crowd with unknown opinions on Swedish Fish.)

Anyways, hope that clears things up.


* For a published track record of this usage, see: an academic paper from Future of Humanity Institute and Center for Health Security staff, another piece by Bostrom, an opinion piece by esteemed synthetic biologist Kevin Esvelt, a piece on synthetic biology by FHI researcher Cassidy Nelson, a piece by Phil Torres.

(UPDATE: The version I initially published proposed the term “memetic hazard” rather than “cognitohazard.” LessWrong commenter MichaelA kindly pointed out that “memetic hazard” already meant a different concept that better suited that name. Since I had only just put out the post, I decided to quickly backpedal and switch out the word for another one with similar provenance. I hate having to do this, but it sure beats not doing it. Thank you, MichaelA!)

Algorithmic horror

There’s a particular emotion that I felt a lot over 2019, much more than any other year. I expect it to continue in future years. That emotion is what I’m calling “algorithmic horror”.

It’s confusion at a targeted ad on Twitter for a product you were just talking about.

It’s seeing a “recommended friend” on Facebook, someone you haven’t seen in years and don’t have any contact with.

It’s skimming a tumblr post with a banal take and not really registering it, and then realizing it was written by a bot.

It’s those baffling Lovecraftian kids’ videos on Youtube.

It’s a disturbing image from ArtBreeder, dreamed up by a computer.

Pictured: a normal dog. Don’t worry about it. It’s fine.

I see this as an outgrowth of ancient, evolution-calibrated emotions. Back in the day, our lives depended on quick recognition of the signs of other animals – predator, prey, or other humans. There’s a moment I remember from animal tracking where disparate details of the environment suddenly align – the way the twigs are snapped and the impressions in the dirt suddenly resolve themselves into the idea of deer.

In the built environment of today, we know that most objects are built by human hands. Still, it can be surprising to walk in an apparently remote natural environment and find a trail or structure, evidence that someone has come this way before you. Skeptic author Michael Shermer calls this “agenticity”, the human bias towards seeing intention and agency in all sorts of patterns.

Or, as argumate puts it:

the trouble is humans are literally structured to find “a wizard did it” a more plausible explanation than things just happening by accident for no reason.

I see algorithmic horror as an extension of this: machine-built objects masquerading as human-made. I looked up oarfish merchandise on Amazon, to see if I could buy anything commemorating the world’s best fish, and found this hat.

If you look at the seller’s listing, you can confirm that all of their products are like this.

It’s a bit incredible. Presumably, no #oarfish hat has ever existed. No human ever created an #oarfish hat or decided that somebody would like to buy them. Possibly, nobody had ever even viewed the #oarfish hat listing until I stumbled onto it.

In a sense this is just an outgrowth of custom-printing services that have been around for decades, but… it’s weird, right? It’s a weird ecosystem.

But human involvement can be even worse. All of those weird Youtube kids’ videos were made by real people. Many of them are acted out by real people. But they were certainly made to market to children on Youtube, and named and designed to fit into a thoughtless algorithm. You can’t tell me that an adult human was ever like “you know what a good artistic work would be?” and then made “Learn Colors Game with Disney Frozen, PJ Masks Paw Patrol Mystery – Spin the Wheel Get Slimed” without financial incentives created by an automated program.

If you want a picture of the future, imagine a faceless adult hand pulling a pony figurine out of a plastic egg, while taking a break between cutting glittered balls of playdoh in half, silent while a prerecorded version of Skip To My Lou plays in the background, forever.

Everything discussed so far is relatively inconsequential, foreshadowing rather than the shade itself. But algorithms are still affecting the world and harming people now – setting racially-biased bail in Kentucky, potentially-biased hiring decisions, facilitating companies recording what goes on in your home, even career Youtubers forced to scramble and pivot as their videos become more or less recommended.

To be clear, algorithms also do a great deal of good – increasing convenience and efficiency, decreasing resource consumption, probably saving lives as well. I don’t mean to write this to say “algorithms are all-around bad”, or even “algorithms are net bad”. Sometimes a system is built solely with good intentions and still comes off as incredibly creepy, like how Facebook judges how suicidal all of its users are.

This is an elegant instance of Goodhart’s Law. Goodhart’s Law says that if you want a certain result and issue rewards for a metric related to the result, you’ll start getting optimization for the metric rather than the result.

The Youtube algorithm – like other algorithms across the web – was created to connect people with content (in order to sell to advertisers, etc.) Producers of content want to attract as much attention as possible to sell their products.

But the algorithms just aren’t good enough to perfectly offer people the online content they want. They’re simplified, rely on keywords, can be duped, etcetera. And everyone knows that potential customers aren’t going to trawl through the hundreds of pages of online content themselves for the best “novelty mug” or “kid’s video”. So a lot of content gets made, and a lot of decisions get made, to fulfill the algorithm’s criteria rather than our own.
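If you want to see that dynamic in miniature, here's a toy simulation. It's entirely made up (not how any real recommender works), but it shows how ranking by a proxy metric, in this case keyword stuffing, surfaces content nobody actually wants:

```python
import random

# Toy Goodhart's Law demo. Entirely made up, not any real platform's algorithm:
# "true quality" is what viewers actually want, but the ranking system can only
# see a proxy metric (here, how stuffed with trending keywords a title is).

random.seed(0)

def make_video(keyword_stuffing):
    true_quality = random.random() - 0.5 * keyword_stuffing  # stuffing makes videos worse
    proxy_score = keyword_stuffing + 0.1 * random.random()   # the algorithm only sees this
    return {"quality": true_quality, "proxy": proxy_score}

honest = [make_video(keyword_stuffing=0.0) for _ in range(50)]
stuffed = [make_video(keyword_stuffing=1.0) for _ in range(50)]

ranked = sorted(honest + stuffed, key=lambda v: v["proxy"], reverse=True)
top_ten = ranked[:10]

avg_all = sum(v["quality"] for v in honest + stuffed) / 100
avg_surfaced = sum(v["quality"] for v in top_ten) / len(top_ten)

print(f"average true quality of everything:    {avg_all:.2f}")
print(f"average true quality of what surfaces: {avg_surfaced:.2f}")
# Optimizing for the proxy surfaces worse content than picking at random would.
```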

In a sense, when we look at the semi-coherent output of algorithms, we’re looking into the uncanny valley between the algorithm’s values and our own.

We live in strange times. Good luck to us all for 2020.


Aside from its numerous forays into real life, algorithmic horror has also been at the center of some stellar fiction. See:

UrracaWatch: A biodefense twitter mystery

This is an internet mystery that is now mostly defunct. I’m going to write it down here anyways in case someone can tell me what was going on, or will be able to in the future.

UPDATE, 2020-05-17: UrracaWatch briefly went back up in December 2020. It is down again, but this time I was able to capture a version on The Internet Archive. Here’s a link to that archived version.

In July 2019, a few people on Professional Biodefense Twitter noted that they were getting follows or likes from some very idiosyncratic twitter accounts. (Some more screenshots are available at that link.)

The posts on these accounts had a few things in common:

  • Links to apparently random web pages related to chemical weapons, biological weapons, or health care
  • These links are routed through “UrracaWatch.com” before leading to the final link
  • No commentary

The accounts also have a few other properties:

  • Real-sounding usernames and display names
  • No other posts on the account
  • I tried reverse-image-searching a couple of account-related images and didn’t see anything. James Diggans on Twitter tried doing the same for account profile photos (of people) and also didn’t find results.

The choice of websites linked was very strange. It looked like someone had searched for various chem/bioweapon/health-related words, then chosen random websites from the first page or two of search results. Definition pages, scholarly articles, products (but all from very different websites.)

Tweets from one of the UrracaWatch Twitter accounts.

Some example UrracaWatch bot account handles: DeterNoBoom, fumeFume31, ChemOrRiley, ChristoBerk, BioWeaP0n, ScienceGina, chempower2112, ChemistWannabe. All of these looked exactly like the Mark Davis @ChemPower2112 account. (Sidenote: I really wish I had archived these more properly. If you find an internet mystery you might want to investigate later, save all the pages right away. You’re just going to have to take me on faith. Alternatively, if you have more screenshots of any of these websites or accounts, please send them to me.)

If this actually is weird psy-op propaganda, I think “Holly England @VaxyourKid” represents a rare example of pro-vaccination English propaganda, as opposed to the more common anti-vaccination propaganda. Also, not to side with the weird maybe-psy-op, but vaccinate your kids.

And here are some facts about the (now-defunct) website UrracaWatch:

  • The website had a very simple format – a list of links (the same kinds of bio/chem/health links that end up on the twitter pages), and a text bar at the top for entering new links.
  • (I tried using it to submit a link and didn’t see an immediate new entry on the page.)
  • There were no advertisements, information on the creators, other pages, etc.
  • According to the page source code and the tracker- and cross-request-detecting Firefox extension Ghostery, there were no trackers, counters, advertisers, or any other complexity on the site.
  • According to the ICANN registry, the domain UrracaWatch.com was registered 9-17-2018 via GoDaddy. The domain has now expired as of 9-17-2019, probably as part of a 12-month domain purchase.
  • Urraca is a Spanish word for magpie, which was a messenger of death in the view of the Anasazi people. (The messenger-of-death part probably isn’t relevant here, but they mention the word as part of a real-life spooky historical site in The Black Tapes Podcast, and this added an unavoidable sinister flavor.) (Urraca is also a woman’s name.)

(I don’t have a screenshot of the website. A March 2019 Internet Archive snapshot is blank, but I’m not sure if that’s an error or was an accurate reflection at the time.)

As far as I can tell, nobody aside from these twitterbots has ever linked to or used UrracaWatch.com for anything at all, anywhere on the web.

By and large, the twitterbots – and I think they must be bots – have been banned. The website is down.

But come on, what on earth was UrracaWatch?

Some possibilities:

  • Advertisement scheme
  • Test of some kind of Twitter-scraping link- or ad-bot that happened to focus on the biodefense community on twitter for some reason
  • Weird psy-op

I’m dubious of the advertisement angle. I’ve been enjoying a lot of the podcast Reply All lately, especially their episodes on weird scams. There’s an interesting question raised in my favorite episode (The Case of the Phantom Caller) while dissecting a weird communication, and I asked it myself here – I just can’t see how anyone is making money off of this. Again, there were occasional product links, but they were to all different websites that looked like legitimate stores, and I don’t think I ever saw multiple links to the same store.

That leaves “bot test” and “weird psy-op”, or something I haven’t thought of yet. If it was propaganda, it wasn’t very good. If you have a guess about what was going on, let me know.

Naked mole-rats: A case study in biological weirdness

Epistemic status: Speculative, just having fun. This piece isn’t well-cited, but I can pull up sources as needed – nothing about mole-rats is my original research. A lot of this piece is based on Wikipedia.

When I wrote about “weirdness” in the past, I called marine invertebrates, archaea viruses, and Florida Man stories “predictably weird”. This means I wasn’t really surprised to learn any new wild fact about them. But there’s a sense in which marine invertebrates both are and aren’t weird. I want to try operationalizing “weirdness” as “amount of unpredictability or diversity present in a class” (or “in an individual”) compared to other members of its group.

So in terms of “animals you hear about” – well, you know the tigers, the mice, the bees, the tuna fish, the songbirds, whatever else comes up in your life. But “deep sea invertebrates” seems to include a variety of improbable creatures – a betentacled neon sphere covered in spikes, a six-foot-long, disconcertingly smooth, flesh-colored worm, bisexual squids, etc. Hey! Weird! That’s weird.

But looking at a phylogenetic tree, we see really quickly that “invertebrates” represent almost the entire animal tree of life.


Invertebrates represent most of the strategies that animals have attempted on earth, and certainly most of the animals on earth. Vertebrates are the odd ones out.

But you know which animals are profoundly weird, no matter which way you look at it? Naked mole rats. Naked mole-rats have like a dozen properties that are not just unusual, not just strange, but absolutely batshit. Let’s review.

1. They don’t age

What? Well, for most animals, their chance of dying goes up over time. You can look at a population and find something like this:

[Graph: for most animals, the chance of dying per year climbs with age]

Mole-rats, they have the same chance of dying at any age. Their graph looks like this:

[Graph: for naked mole-rats, the chance of dying per year stays flat across ages]

They’re joined, more or less, by a few species of jellyfish, flatworms, turtles, lobsters, and at least one fish.

They’re hugely long-lived compared to other rodents, seen in zoos at 30+ years old compared to the couple brief years that rats get.
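If it's hard to picture the difference between those two curves without the graphs, here's a tiny sketch with made-up hazard rates: one that climbs with age, as in most animals, and one that stays flat, mole-rat-style.

```python
# Toy survival curves with made-up rates, purely illustrative.
# "Most animals": the per-year chance of dying climbs with age.
# "Mole-rat-style": the per-year chance of dying stays flat.

def survivors(hazard_at_age, years=30, starting_population=1000.0):
    """Track how many of a starting cohort are still alive each year."""
    alive = starting_population
    curve = []
    for age in range(years):
        alive *= 1 - hazard_at_age(age)
        curve.append(alive)
    return curve

aging_hazard = lambda age: min(0.02 + 0.01 * age, 1.0)  # risk rises with age
flat_hazard = lambda age: 0.05                          # same risk at every age

aging_curve = survivors(aging_hazard)
flat_curve = survivors(flat_hazard)

for age in (5, 15, 25):
    print(f"age {age:>2}: rising-hazard cohort ~{aging_curve[age]:.0f} left, "
          f"flat-hazard cohort ~{flat_curve[age]:.0f} left")
```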

2. They don’t get cancer

Cancer generally seems to be the curse of multicellular beings, but naked mole-rats are an exception. A couple mole-rats have developed cancer-like growths in captivity, but no wild mole-rat has ever been found with cancer.

3. They don’t feel some forms of pain

Mole-rats don’t respond to acid or capsaicin, which is, as far as I know, unique among mammals.

4. They’re eusocial

Definitely unique among mammals. Like bees, ants, and termites, naked mole-rats have a single breeding “queen” in each colony, and other “worker” individuals exist in castes that perform specific tasks. In an evolutionary sense, this means that the “unit of selection” for the species is the queen, not any individual – the queen’s genes are the ones that get passed down.

They’re also a fascinating case study of an animal whose existence was deduced before it was proven. Nobody knew about eusocial mammals for a long time. In 1974, entomologist Richard Alexander, who studied eusocial insects, wrote down a set of environmental characteristics he thought would be required for a eusocial mammal to evolve. Starting around 1981 and over the following decade, naked mole-rats – a perfect match for his predictions – were found to be eusocial.

5. They don’t have fur

Obviously. But aside from genetic flukes or domesticated breeds, that puts them in a small unlikely group with only some marine mammals, rhinoceroses, hippos, elephants, one species of boar, and… us.

[Gif: a naked mole-rat crawling through a tube]

You and this entity have so much in common.

6. They’re able to survive ridiculously low oxygen levels

They use very little oxygen during normal metabolism, much less than comparably-sized rodents, and they can survive for hours at 5% oxygen (a quarter of normal levels).

7. Their front teeth move back and forth like chopsticks

I’m not actually sure how common this is in rodents. But it really weirded me out.

8. They have no regular sleep schedule

This is weird, because jellyfish have sleep schedules. But not mole-rats!

9. They’re cold-blooded

They have basically no ability to adjust their body temperature internally, perhaps because their caves tend to be rather constant temperatures. If they need to be a different temperature, they can huddle together, or move to a higher or lower level in their burrow.


All of this makes me think that mole-rats must have some underlying unusual properties which lead to all this – a “weirdness generator”, if you will.

A lot of these are connected to the fact that mole rats spend almost their entire lives underground. There are lots of burrowing animals, but “almost their entire” is pretty unusual – they don’t surface to find food, water, or (usually) mates. (I think they might only surface when digging tunnels and when a colony splits.) So this might explain (8) – no need for a sleep schedule when you can’t see the sun. It also seems to explain (5) and (9), because thermoregulation is unnecessary when they’re living in an environment that’s a pretty constant temperature.

It probably explains (6) because lower burrow levels might have very little oxygen most of the time, although there’s some debate about this – their burrows might actually be pretty well ventilated.

And Richard Alexander’s 12 postulates that would lead to a eusocial vertebrate – plus some other knowledge of eusociality – suggest that this underground climate, when combined with the available lifestyle and food source of a mole-rat, should lead to eusociality.

It might also be the source of (2) and (3) – people have theorized that higher CO2 or lower oxygen levels in burrows might reduce DNA damage or be related to neuron function or something. (This would also explain why only mole-rats in captivity have had tumors, since they’re kept at atmospheric oxygen levels.) These still seem to be up in the air, though. Mole-rats clearly have a variety of fascinating biochemical tricks that are still being understood.

So there’s at least one “weirdness generator” that leads to all of these strange mole-rat properties. There might be more.

I’m pretty sure it’s not the chopstick teeth (7), at least – but as with many predictions one could make about mole rats, I could easily be wrong.

[Gif: a naked mole-rat]

To watch some naked mole-rats going about their lives, check out the Pacific Science Center’s mole-rat live camera. It’s really fun, if a writhing mass of playful otters that are also uncooked hotdogs sounds fun to you.


Spaghetti Towers

Here’s a pattern I’d like to be able to talk about. It might be known under a certain name somewhere, but if it is, I don’t know it. I call it a Spaghetti Tower. It shows up in large complex systems that are built haphazardly.

Someone or something builds the first Part A.

[Diagram: Part A]

Later, someone wants to put a second Part B on top of Part A, either out of convenience (a common function, just somewhere to put it) or as a refinement to Part A.

[Diagram: Part B built on top of Part A]

Now, suppose you want to tweak Part A. If you do that, you might break Part B, since it interacts with bits of Part A. So you might instead build Part C on top of the previous ones.

[Diagram: Part C built on top of Parts A and B]

And by the time your system looks like this, it’s much harder to tell what changes you can make to an earlier part without crashing some component, so you’re basically relegated to throwing another part on top of the pile.

[Diagram: a teetering pile of interdependent parts]

I call these spaghetti towers for two reasons: One, because they tend to quickly take on circuitous knotty tangled structures, like what programmers call “spaghetti code”. (Part of the problem with spaghetti code is that it can lead to spaghetti towers.)

Especially since they’re usually interwoven in multiple dimensions, and thus look more like this:

[Photo: a multicolored tangle]

“Can you just straighten out the yellow one without touching any of the others? Thanks.”

Second, because shortsightedness in the design process is a crucial part of spaghetti machines. In order to design a spaghetti system, you throw spaghetti against a wall and see if it sticks. Then, when you want to add another part, you throw more spaghetti until it sticks to that spaghetti. And later, you throw more spaghetti. So it goes. And if you decide that you want to tweak the bottom layer to make it a little more useful – which you might want to do because, say, it was built out of spaghetti – without damaging the next layers of gummy partially-dried spaghetti, well then, good luck.

Note that all systems have load-bearing, structural pieces. This does not make them spaghetti towers. The distinction about spaghetti towers is that they have a lot of shoddily-built structural components that are completely unintentional. A bridge has major load-bearing components – they’re pretty obvious, strong, elegant, and efficiently support the rest of the structure. A spaghetti tower is more like this.

[Photo: a jury-rigged plumbing fix]

The motto of the spaghetti tower is “Sure, it works fine, as long as you never run lukewarm water through it and turn off the washing machine during thunderstorms.” || Image from the always-delightful r/DiWHY.

Where do spaghetti towers appear?

  • Basically all of biology works like this. Absolutely all of evolution is made by throwing spaghetti against walls and seeing what sticks. (More accurately, throwing nucleic acid against harsh reality and seeing what successfully makes more nucleic acid.) We are 3.5 billion years of hacks in fragile trench coats.
    • Slate Star Codex describes the phenomenon in neurotransmitters, but it’s true for all of molecular biology:

You know those stories about clueless old people who get to their Gmail account by typing “Google” into Bing, clicking on Google in the Bing search results, typing “Gmail” into Google, and then clicking on Gmail in the Google search results?

I am reading about serotonin transmission now, and everything in the human brain works on this principle. If your brain needs to downregulate a neurotransmitter, it’ll start by upregulating a completely different neurotransmitter, which upregulates the first neurotransmitter, which hits autoreceptors that downregulate the first neurotransmitter, which then cancel the upregulation, and eventually the neurotransmitter gets downregulated.

Meanwhile, my patients are all like “How come this drug that was supposed to cure my depression is giving me vision problems?” and at least on some level the answer is “how come when Bing is down your grandfather can’t access Gmail?”

  • My programming friends tell me that spaghetti towers are near-universal in the codebases of large companies. It would theoretically be nice if every function were neatly ordered, but actually, the thing you’re working on has three different dependencies, two of which are unmaintained and were abandoned when the guy who built them went to work at Google, and you can never be 100% certain that your code tweak won’t crash the site. (A minimal sketch of this pattern follows the list.)
  • I think this also explains some of why bureaucracies look and act the way they do, and are so hard to change.
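To make that concrete, here's a minimal, entirely made-up sketch of the pattern in code (no real codebase, just the shape of the problem): Part B quietly depends on Part A's internals, Part C parses Part B's output, and suddenly nothing in Part A is safe to change.

```python
# A toy spaghetti tower, entirely made up. Each layer was convenient to write at
# the time; together they make the bottom layer unsafe to change.

# Part A: someone builds a simple user record.
def make_user(name):
    return {"name": name, "joined": "2018-12-20"}  # ad hoc date format

# Part B: someone else wants display names, and reaches into Part A's internals,
# including the exact date format Part A happened to use.
def display_name(user):
    year = user["joined"].split("-")[0]  # breaks if Part A changes its format
    return f"{user['name']} (since {year})"

# Part C: reporting code built on top of Part B's output string, not on the data.
def yearly_report(users):
    counts = {}
    for user in users:
        year = display_name(user).split("(since ")[1].rstrip(")")  # parses B's string!
        counts[year] = counts.get(year, 0) + 1
    return counts

# Now "just store joined dates as proper timestamps" in Part A silently breaks
# B and C, so the next person bolts Part D on top instead of touching Part A.
print(yearly_report([make_user("Avery"), make_user("Robin")]))  # {'2018': 2}
```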

I think there are probably a lot of examples of spaghetti towers, and they probably have big ramifications for things like, for instance, what systems evolution can and can’t build.

I want to do a much deeper and more thoughtful analysis about what exactly the implications here are, but this has been kicking around my brain for long enough and all I want to do is get the concept out there.

Does this feel like a meaningful concept? Where do you see spaghetti towers?

Crossposted to LessWrong.


Happy solstice from Eukaryote Writes Blog. Here’s a playlist for you (or listen to Raymond Arnold’s Secular Solstice music.)

The funnel of human experience

[EDIT: A previous version of this post had some errors. Thanks to jeff8765 for pinpointing the error and to esrogs in the comments for bringing it to my attention as well. This has been fixed. Also, I wrote FHI when I meant FLI.]

The graph of the human population over time is also a map of human experience. Think of each year as being “amount of human lived experience that happened this year.” On the left, we see the approximate dawn of the modern human species in 50,000 BC. On the right, the population exploding in the present day.

[Graph: world population from 50,000 BC to the present]

It turns out that if you add up all these years, 50% of human experience has happened after 1309 AD. 15% of all experience has been experienced by people who are alive right now.

I call this “the funnel of human experience” – the fact that because of a tiny initial population blossoming out into a huge modern population, more of human experience has happened recently than time would suggest.

50,000 years is a long time, but 8,000,000,000 people is a lot of people.
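For the curious, the underlying calculation is simple. Here's a minimal sketch (not the actual code behind this post) that assumes a table of (year, population) rows, something like the dataset linked at the bottom of this post:

```python
# Minimal sketch: given (year, population) snapshots, approximate the person-years
# lived in each interval and find the year by which half of all human experience
# had happened. Assumes a list of (year, population) pairs sorted by year, with
# BC years negative, e.g. loaded from a dataset like the one linked below.

def person_years_per_interval(snapshots):
    intervals = []
    for (y0, p0), (y1, p1) in zip(snapshots, snapshots[1:]):
        average_population = (p0 + p1) / 2        # crude trapezoid estimate
        intervals.append((y1, average_population * (y1 - y0)))
    return intervals

def halfway_year(snapshots):
    """Year by which 50% of all person-years had been lived."""
    intervals = person_years_per_interval(snapshots)
    total = sum(py for _, py in intervals)
    running = 0.0
    for year, py in intervals:
        running += py
        if running >= total / 2:
            return year
    return intervals[-1][0]

# Usage, with made-up numbers just to show the input shape:
# snapshots = [(-50000, 1e5), (-10000, 4e6), (1, 3e8), (1300, 4e8), (2018, 7.6e9)]
# print(halfway_year(snapshots))
```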


Early human experience: casts of the skulls of the earliest modern humans found on various continents. Display at the Smithsonian National Museum of Natural History.



If you want to expand on this, you can start doing some Fermi estimates. We as a species have spent…

  • 1,650,000,000,000 total “human experience years”
    • See my dataset linked at the bottom of this post.
  • 7,450,000,000 human years spent having sex
    • Humans spend 0.45% of our lives having sex. 0.45% * [total human experience years] = 7E9 years
  • 52,000,000,000 years spent drinking coffee
    • 500 billion cups of coffee drunk this year x 15 minutes to drink each cup x 100 years* = 5E10 years
      • *Coffee consumption has likely been much higher recently than historically, but it does have a long history. I’m estimating about a hundred years of current consumption for total global consumption ever.
  • 1,000,000,000 years spent in labor
    • 110,000,000,000 humans ever x ½ women x 12 pregnancies* x 15 hours apiece = 1.1E9 years
      • *Infant mortality, yo. H/t Ellie and Shaw for this estimate.
  • 417,000,000 years spent worshipping the Greek gods
    • 1000 years* x 10,000,000 people** x 365 days a year x 1 hour a day*** = 4E8 years

      • *Some googling suggested that people worshipped the Greek/Roman Gods in some capacity from roughly 500 BC to 500 AD.
      • **There were about 10 million people in Ancient Greece. This probably tapered a lot to the beginning and end of that period, but on the other hand worship must have been more widespread than just Greece, and there have been pagans and Hellenists worshiping since then.
      • ***Worshiping generally took about an hour a day on average, figuring in priests and festivals? Sure.
  • 30,000,000 years spent watching Netflix
    • 140,000,000 hours/day* x 365 days x 5 years** = 2.92E7 years
      • * Netflix users watched an average of 140 million hours of content a day in 2017.
      • **Netflix the company has been around for 10 years, but has gotten bigger recently.
  • 50,000 years spent drinking coffee in Waffle House

So humanity in aggregate has spent about ten times as long worshiping the Greek gods as we’ve spent watching Netflix.

We’ve spent another ten times as long having sex as we’ve spent worshipping the Greek gods.

And we’ve spent ten times as long drinking coffee as we’ve spent having sex.
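If you want to poke at these numbers yourself, here's a quick spot-check of a few of the estimates above, written out as code. It uses exactly the inputs stated in the bullets; the only thing added is the hours-to-years conversion.

```python
# Spot-checking a few of the Fermi estimates above, using exactly the inputs the
# bullets state. Nothing new here, just the arithmetic written out.

HOURS_PER_YEAR = 24 * 365

total_experience_years = 1.65e12  # total "human experience years"

# Sex: 0.45% of all lived time
sex_years = 0.0045 * total_experience_years
print(f"sex:        {sex_years:.1e} years")   # ~7.4e9

# Labor: 110 billion humans ever x 1/2 women x 12 pregnancies x 15 hours apiece
labor_years = 110e9 * 0.5 * 12 * 15 / HOURS_PER_YEAR
print(f"labor:      {labor_years:.1e} years") # ~1.1e9

# Greek gods: 1000 years x 10 million people x 1 hour a day
greek_years = 1000 * 10e6 * 365 * 1 / HOURS_PER_YEAR
print(f"greek gods: {greek_years:.1e} years") # ~4.2e8
```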


I’m not sure what this implies. Here are a few things I gathered from this:

1) I used to be annoyed at my high school world history classes for spending so much time on medieval history and after, when there was, you know, all of history before that too. Obviously there are other reasons for this – Eurocentrism, the fact that more recent events have clearer ramifications today – but to some degree this is in fact accurately reflecting how much history there is.

On the other hand, I spent a bunch of time in school learning about the Greek Gods, a tiny chunk of time learning about labor, and virtually no time learning about coffee. This is another disappointing trend in the way history is approached and taught, focusing on a series of major events rather than the day-to-day life of people.

2) The Funnel gets more stark the closer you move to the present day. Look at science. FLI reports that 90% of PhDs that have ever lived are alive right now. That means most of all scientific thought is happening in parallel rather than sequentially.

3) You can’t use the Funnel to reason about everything. For instance, you can’t use it to reason about extended evolutionary processes. Evolution is necessarily cumulative. It works on the unit of generations, not individuals. (You can make some inferences about evolution – for instance, the likelihood of any particular mutation occurring increases when there are more individuals to mutate – but evolution still has the same number of generations to work with, no matter how large each generation is.)

4) This made me think about the phrase “living memory”. The world’s oldest living person is Kane Tanaka, who was born in 1903. 28% of the entirety of human experience has happened since her birth. As mentioned above, 15% has been directly experienced by living people. We have writing and communication and memory, so we have a flawed channel by which to inherit information, and experiences in a sense. But humans as a species can only directly remember as far back as 1903.


Here’s my dataset. The population data comes from the Population Reference Bureau and their report on how many humans ever lived, and from Our World In Data. Let me know if you get anything from this.

Fun fact: The average living human is 30.4 years old.

Wait But Why’s explanation of the real revolution of artificial intelligence is relevant and worth reading. See also Luke Muehlhauser’s conclusions on the Industrial Revolution: Part One and Part Two.


Crossposted to LessWrong.

Caring less

Why don’t more attempts at persuasion take the form “care less about ABC”, rather than the popular “care more about XYZ”?

People, in general, can only do so much caring. We can only spend so many resources and so much effort and brainpower on the things we value.

For instance: Avery spends 40 hours a week working at a homeless shelter, and a substantial amount of their free time researching issues and lobbying for better policy for the homeless. Avery learns about existential risk and decides that it’s much more important than homelessness, say 100 times more, and is able to pivot their career into working on existential risk instead.

But nobody expects Avery to work 100 times harder on existential risk, or feel 100 times more strongly about it. That’s ridiculous. There literally isn’t enough time in the day, and thinking like that is a good way to burn out like a meteor in orbit.

Avery also doesn’t stop caring about homelessness – not at all. But as a result of caring so much more about existential risk, they do have to care less about homelessness (in any meaningful or practical sense) as a result.

And this is totally normal. It would be kind of nice if we could put a meaningful amount of energy in proportion to everything we care about, but we only have so much emotional and physical energy and time, and caring about different things over time is a natural part of learning and life.

When we talk about what we should care about, where we should focus more of our time and energy, we really only have one kludgey tool to do so: “care more”. Society, people, and companies are constantly telling you to “care more” about certain things. Your brain will take some of these, and through a complicated process, reallocate your priorities such that each gets an amount of attention that fits into your actual stores of time and emotional and physical energy.

But since what we value and how much is often considered, literally, the most important thing on this dismal earth, I want more nuance and more accuracy in this process. Introducing “consider caring less” into the conversation does this. It describes an important mental action and lets you describe what you want more accurately. Caring less already happens in people’s beliefs, it affects the world, so let’s talk about it.

On top of that, the constant chorus of “care more” is also exhausting. It creates a societal backdrop of guilt and anxiety. And some of this is good – the world is filled with problems and it’s important to care about fixing them. But you can’t actually do everything, and establishing the mental affordance to care less about something without disregarding it entirely or feeling like an awful human is better for the ability to prioritize things in accordance with your values.

I’ve been talking loosely about cause areas, but this applies everywhere. A friend describes how in work meetings, the only conversational attitude ever used is this is so important, we need to work hard on that, this part is crucial, let’s put more effort here. Are these employees going to work three times harder because you gave them more things to focus on, and didn’t tell them to focus on anything else less? No.

I suspect that more “care less” messaging would do wonders on creating a life or a society with more yin, more slack, and a more relaxed and sensible attitude towards priorities and values.

It also implies a style of thinking we’re less used to than “finding reasons people should care”, but it’s one that can be done and it reflects actual mental processes that already exist.


Why don’t we see this more?

(Or “why couldn’t we care less”?)

Some suggestions:

  • It’s more incongruous with brains

Brains can create connections easily, but unlike computers, can’t erase them. You can learn a fact by practicing it on notecards or by phone reminders, but can’t un-learn a fact except by disuse. “Care less” is requesting an action from you that’s harder to implement than “care more”.

  • It’s not obvious how to care less about something

This might be a cultural thing, though. Ways to care less about something include: mindfulness, devoting fewer resources towards a thing, allowing yourself to put more time into your other interests, and reconsidering when you’re taking an action based on the thing and deciding if you want to do something else.

  • It sounds preachy

I suspect people feel that if you assert “care more about this”, you’re just sharing your point of view, and information that might be useful, and working in good faith. But if you say “care less about that”, it feels like you know their values and their point of view, and you’re declaring that you understand their priorities better than them and that their priorities are wrong.

Actually, I think either “care more” or “care less” can have both of those nuances. At its best, “maybe care less” is a helpful and friendly suggestion made in your best interests. There are plenty of times I could use advice along the lines of “care less”.

At its worst, “care more” means “I know your values better than you, I know you’re not taking them seriously, and I’m so sure I’m right that I feel entitled to take up your valuable time explaining why.”

  • It invokes defensiveness

If you treat the things you care about as cherished parts of your identity, you may react badly to people telling you to care less about them. If so, “care less about something you already care about” has a negative emotional effect compared to “care more about something you don’t already care about”.

(On the other hand, being told you don’t have to worry about something can be a relief. It might depend on if you see the thought in question as a treasured gift or as a burden. I’m not sure.)

  • It’s less memetically fit

“Care more about X” sounds more exciting and engaging than “care less about Y”, so people are more likely to remember and spread it.

  • It’s dangerous

Maybe? Maybe by telling people to “care less” you’ll remove their motivations and drive them into an unrelenting apathy. But if you stop caring about something major, you can care more about other things.

Also, if this happens and harms people, it already happens when you tell people to “care more” and thus radically change their feelings and values. Unfortunately, a process exists by which other people can insert potentially-hostile memes into your brain without permission, and it’s called communication. “Care less” doesn’t seem obviously more risky than the reverse.

  • We already do (sometimes)

Buddhism has a lot to say on relinquishing attachment and desires.

Self-help-type things often say “don’t worry about what other people think of you” or “peer pressure isn’t worth your attention”, although they rarely come with strategies.

Criticism implicitly says “care less about X”, though this is rarely explicitly turned into suggestions for the reader.

Effective Altruism is an example of this when it criticizes ineffective cause areas or charities. This image implicitly says “…So maybe care more about animals on farms and less about pets,” which seems like a correct message for them to be sending.

Image from Animal Charity Evaluators.


Anyway, maybe “care less” messaging doesn’t work well for some reason, but existing messaging is homogeneous in this way and I’d love to see people at least try for some variation.


Photo taken at the 2016 Bay Area Secular Solstice. During an intermission, sticky notes and markers were passed around, and we were given the prompt: “If someone you knew and loved was suffering in a really bad situation, and was on the verge of giving up, what would you tell them?” Most of them were beautiful messages of encouragement and hope and support, but this was my favorite.


Crossposted on LessWrong.

This blog has a Patreon. If you like what you’ve read, consider giving it your support so I can make more of it.

Fictional body language

Here’s something weird.

A common piece of advice for fiction writers is to “show, not tell” a character’s emotions. It’s not bad advice. It means that when you want to convey an emotional impression, describe the physical characteristics instead.

The usual result of applying this advice is that instead of a page of “Alice said nervously” or “Bob was confused”, you get a vivid page of action: “Alice stuttered, rubbing at her temples with a shaking hand,” or “Bob blinked and arched his eyebrows.”

The second thing is certainly better than the first thing. But a strange thing happens when the emotional valence isn’t easily replaced with an easily-described bit of body language. Characters in books whose authors follow this advice seem to be doing a lot more yawning, trembling, sighing, emotional swallowing, groaning, and nodding than I or anyone I talk to does in real life.

It gets even stranger. These characters bat their lashes, or grip things so tightly their knuckles go white, or grit their teeth, or their mouths go dry. I variously either don’t think I do those, or wouldn’t notice someone else doing it.

Blushing is a very good example, for me. Because I read books, I knew enough that I could describe a character blushing in my own writing, and the circumstances in which it would happen, and what it looked like. I don’t think I’d actually noticed anyone blush in real life. A couple months after this first occurred to me, a friend happened to point out that another friend was blushing, and I was like, oh, alright, that is what’s going on, I guess this is a thing after all. But I wouldn’t have known before.

To me, it was like a piece of fictional body language we’ve all implicitly agreed represents “the thing your body does when you’re embarrassed or flattered or lovestruck.” I know there’s a particular feeling there, which I could attach to the foreign physical motion, and let the blushing description conjure it up. It didn’t seem any weirder than a book having elves.

(Brienne has written about how writing fiction, and reading about writing fiction, has helped her get better at interpreting emotions from physical cues. They certainly are often real physical cues – I just think the points where this breaks down are interesting.)

Online

There’s another case where humans are innovatively trying to solve the problem of representing feelings in a written medium, which is casual messaging. It’s a constantly evolving blend of your best descriptive words, verbs, emoticons, emojis, and now stickers and gifs and whatever else your platform supports. Let’s draw your attention to the humble emoticon, a marvel of written language. A handful of typographic characters represent a human face – something millions of years of evolution have fine-tuned our brains to interpret precisely.

(In some cases, these are pretty accurate: :) and ^_^ represent more similar things than :) and ;), even though ^_^ doesn’t even have the classic turned-up mouth of representational smiles. Body language: it works!)

:)

:|

:<

Now let’s consider this familiar face:

:P

And think of the context in which it’s normally found. If someone was talking to you in person and told a joke, or made a sarcastic comment, and then stuck their tongue out, you’d be puzzled! Especially if they kept doing it! Despite being a clear representation of a human face, that expression only makes sense in a written medium.

I understand why something like :P needs to exist: If someone makes a joke at you in meatspace, how do you tell it’s a joke? Tone of voice, small facial expressions, the way they look at you, perhaps? All of those things are hard to convey in character form. A stuck-out tongue isn’t, and we know what it means.

The ;) and :D emoticons translate to meatspace a little better, maybe. Still, when’s the last time someone winked slyly at you in person?

You certainly can communicate complex things by using your words [CITATION NEEDED], but especially in casual conversations, it’s nice to have expressive shortcuts. I wrote a while ago:

Facebook Messenger’s addition of custom chat colors and a customizable default emoji has, to me, made a weirdly big difference to what it feels like to use it. I think (at least compared with the online messaging platforms I’ve tried before) it’s unique in letting you customize the environment in which you interact with another person (or a group of people).

In meatspace, you might often talk with someone in the same place – a bedroom, a college dining hall – and that interaction takes on the flavor of that place.

Even if not, in meatspace, you have an experience in common, which is the surrounding environment. It sets that interaction apart from all of the other ones. Taking a walk or going to a coffee shop to talk to someone feels different from sitting down in your shared living room, or from meeting them at your office.

You also have a lot of specific qualia of interacting with a person – a deep comfort, a slight tension, the exact sense of how they respond to eye contact or listen to you – all of which are either lost or replaced with cruder variations in the low-bandwidth context of text channels.

And Messenger doesn’t do much, but it adds a little bit of flavor to your interaction with someone besides the literal string of unicode characters they send you. Like, we’re miles apart and I may not currently be able to hear your voice or appreciate you in person, but instead, we can share the color red and send each other a picture of a camel in three different sizes, which is a step in that direction.

(Other emoticons sometimes take on their own valences: the game master in an online RPG I played in had a habit of typing only “ : ) ” in response when you asked him a juicy question, which quickly filled players with a sense of excitement and foreboding. I’ve since tried using it on other platforms, before realizing it doesn’t convey any of that to anyone else. Similarly, users of certain websites may have a strong reaction to the typographic face “uwu”.)

Reasoning from fictional examples

In something that could arguably be called a study, I grabbed three books and chose some arbitrary pages in them to look at how characters’ emotions are represented, particularly around dialogue.

Lirael by Garth Nix:

133: Lirael “shivers” as she reads a book about a monster. She “stops reading, nervously swallows, and reads the last line again”, and “breathes a long sigh of relief.”

428: She “nods dumbly” in response to another character, and stares at an unfamiliar figure.

259: A character smiles when reading a letter from a friend.

624: Two characters “exchange glances of concern”, one “speaks quickly”.

Most of these are pretty reasonable. The first one feels overdone to me, but then again, she’s really agitated when she’s reading the book, so maybe that’s fair? Nonetheless, flipping through, I think this is Garth Nix’s main strategy. The characters might speak “honestly” or “nervously” or “with deliberation” as well, but when Nix really wants you to know how someone’s feeling, he’ll show you how they act.

American Gods by Neil Gaiman:

First page I flipped to didn’t have any.

364: A character “smiles”, “makes a moue”, “smiles again”, “tips her head to one side”. Shadow (the main character) “feels himself beginning to blush.”

175: A character “scowls fleetingly.” A different character “sighs” and his tone changes.

The last page also didn’t have any.

Gaiman spends more time laying out a character’s thoughts: Shadow imagines how a moment came to happen, or it’s his interpretation that gives flavor – “[Another character] looked very old as he said this, and fragile.”

Earth by David Brin:

First two pages I flipped to didn’t have dialogue.

428: Characters “wave nonchalantly”, “pause”, “shrug”, “shrug” again, “fold his arms, looking quite relaxed”, speak with “an ingratiating smile”, and “continue with a smile”.

207: Characters “nod” and one “plants a hand on another’s shoulder”.

168: “Shivers coursed his back. Logan wondered if a microbe might feel this way, looking with sudden awe into a truly giant soul.” One’s “face grows ashen”, another “blinks.” Amusingly, “the engineer shrugged, an expressive gesture.” Expressive of what?

Brin spends a lot of time living in characters’ heads, describing their thoughts. This gives him time to build his detailed sci-fi world, and also gives you enough of a picture of characters that it’s easy to imagine their reactions later on.

How to use this

I don’t think this is necessarily a problem in need of a solution, but fiction is trying to represent the way real people might act. Even if the premise of your novel starts with “there’s magic”, it probably doesn’t segue into “there’s magic and also humans are 50% more physically expressive, and they are always blushing.” (…Maybe the blushing thing is just me.) There’s something appealing about being able to represent body language accurately.

The quick analysis in the section above suggests at least three ways writers express how a fictional character is feeling to a reader. I don’t mean to imply that any one is objectively better than the others, although the third is my favorite.

1) Just describe how they feel. “Alice was nervous”, “Bob said happily.”

This gives the reader information. How was Alice feeling? Clearly, Alice was nervous. It doesn’t convey nervousness, though. Saying the word “nervous” does not generally make someone nervous – it takes some mental effort to translate that into nervous actions or thoughts.

2) Describe their actions. A character’s sighing, their chin stuck out, their unblinking eye contact, their gulping. Sheets like these exist to help.

I suspect these work in two ways:

  1. You can imagine yourself doing the action, and then what mental state might have caused it. Especially if it’s the main character, and you’re spending time in their head anyway. It might also be “Wow, Lirael is shivering in fear, and I have to be really scared before I shiver, so she must be very frightened,” though I imagine that making this inference is asking a lot of a reader.
  2. You can visualize a character doing it, in your mental map of the scene, and imagine what you’d think if you saw someone doing it.

Either way, the author is using visualization to get you to recreate being there yourself. This is where I’m claiming that oddities like fictional body language develop.

3) Use metaphor, or describe a character’s thoughts, in such a way that the reader generates the feeling in their own head.

Gaiman in particular does this quite skillfully in American Gods.

[Listening to another character talk on and on, and then pause:] Shadow hadn’t said anything, and hadn’t planned to say anything, but he felt it was required of him, so said, “Well, weren’t they?”

[While in various degrees of psychological turmoil:] He did not trust his voice not to betray him, so he simply shook his head.

[And:] He wished he could come back with something smart and sharp, but Town was already back at the Humvee, and climbing up into the car; and Shadow still couldn’t think of anything clever to say.

Also metaphors, or images:

Chicago happened slowly, like a migraine.

There must have been thirty, maybe even forty people in that hall, and now they were every one of them looking intently at their playing cards, or their feet, or their fingernails, and pretending as hard as they could not to be listening.

By doing the mental exercises written out in the text – letting your mind run over them and provoke some images – the author can get your brain to conjure the feeling from a seemingly unrelated description. How cool is that! It doesn’t actually matter whether, in the narrative, it’s occurred to Shadow that Chicago is happening like a migraine. Your brain is doing the important work on its own.


(Possible Facebook Messenger equivalents: 1) “I’m sad” or “That’s funny!” 2) Emoticons / emotive stickers, *hug* or other actions. 3) Gifs, more abstract stickers.)


You might be able to use this to derive some wisdom for writing fiction. I like metaphors, for one.

If you want to do body language more accurately, you can also pay attention to exactly how an emotion feels to you, where it sits in your body or your mind – meditation might be helpful – and try to describe that.

Either might be problematic because people experience emotions differently – the exact way you feel an emotion might be completely inscrutable to someone else. Maybe you don’t usually feel emotions in your body, or you don’t easily name them in your head. Maybe your body language isn’t standard. Emotions tend to derive from similar parts of the nervous system, though, so you probably won’t be totally off.

(It’d also be cool if the reader then learned about a new way to feel emotions from your fiction, but the failure mode I’m thinking of is ‘reader has no idea what you were trying to convey.’)

You could also try people-watching (or watching TV or a movie), and examining how you know someone is feeling a certain way. I bet some of these are subtle – slight shifts in posture and expression – but you might get some inspiration. (Unless you had to learn this by memorizing cues from fiction, in which case this exercise is less likely to be useful.)


Overall, given all the shades of nuance that go into emotional valence, and the different ways people feel or demonstrate emotions, I think it’s hardly surprising that we’ve come up with linguistic shorthands, even in places that are trying to be representational.


[Header image: images from the EmojiOne 5.0 update, assembled by the honestly fantastic Emojipedia Blog.]

When you’re expecting the weird

Sometimes, the more I know about a topic, the less skeptical I am about new things in that field. I’m expecting them to be weird.

One category is deep sea animals. I’ve been learning about them for a long time, and when I started, nearly anything could blow my mind. I’d look up sources all the time because everything sounded fake. Even after finding a source, I’d be skeptical. There’s no reason for anyone to photoshop that many pictures of that sea slug, sure, but on the other hand, LOOK AT IT.

[Image: a sea slug]

[Source]

Nowadays, I’ve seen even more deep sea critters, and I’m much less skeptical. I think you could make up basically any wild thing and I’d believe it. You could say: “NOAA discovered a fish with two tails that only mates on Thursdays.” Or “National Geographic wrote about this deep-sea worm that’s as smart as a dog and fears death.” And I’d be like “yeah, that seems reasonable, I buy it.”

Here’s a test. Five of these animals are real, and three are made up.

  1. A jellyfish that resembles a three-meter-diameter circular bedsheet
  2. A worm that, as an adult, has no DNA.
  3. A worm that branches as it ages, leaving it with one head but hundreds of butts.
  4. A worm with the body plan of a squid.
  5. A sponge evolved to live inside of fish gills.
  6. A sea slug that lives over a huge geographic region, but only in a specific two-meter wide range of depth.
  7. A copepod that’s totally transparent from some angles, and bright blue from others.
  8. A shrimp that shuts its claws so fast it creates a mini sonic boom.

(Answers at bottom of page. Control-F “answers” to jump there.)

Of course, I’m only expecting to be surprised about information in a certain sphere. If you told me that someone found a fish that had a working combustion engine, or spoke German, I’d call bullshit – because those things are clearly outside the realm of zoology.

Still, there’s stuff like this. WHY ARE YOU.

Some other categories where I have this:

  • Modern American politics
  • Florida Man stories
  • Head injury symptoms/aftermath
  • Places extremophiles live

Note that these aren’t cases where I tend to underapply skepticism – these are cases where, most of the time, not being skeptical works. If people were making up fake Florida Man stories, I’d have to start being skeptical again, but until then, I can rely on reality being stranger than I expect.

What’s the deal? Well, a telling instance of the phenomenon, for me, is archaeal viruses.

  • Some of these viruses are stable and active in 95° C water.
  • This archaeal virus is shaped like a wine bottle.
  • This one is shaped like a lemon.
  • This one appears to have evolved independently and shares no genes with other viruses.
  • This one GROWS ON ITS OWN, outside of a host.
  • This one builds seven-sided pyramids on the surfaces of cells it infects.

[Image: pyramid structures on the surface of an infected cell]

It has something to do with either lysis or summoning very small demons. [Source]

These are really surprising to me because I know a little bit about viruses. If you know next to nothing about viruses, a lemon-shaped virus probably isn’t that mind-blowing. Cells are sphere-shaped, right? A lemon shape isn’t that far from a sphere shape. The ubiquitous spaceship-shaped T4 is more likely to blow your mind.

[Image: T4 bacteriophage]

Don’t worry – it comes in peace, unless you happen to be E. coli. [Source]

Similarly, if you were a planet-hopping space alien first visiting earth, and your alien buddy told you about the giant garbage-bag shaped jellyfish, that probably wouldn’t be mind-blowing – for all you know, everything on earth looks like that. All information in that category is new to you, and you don’t have enough context for it to seem weird yet.

At the same time, if I studied archaeal viruses intensely, I’d probably get a sense of the diversity in the field. Some strange stuff like the seven-sided pyramids would still come along as it’s discovered, but most new information would fit into my models.

This suggests that for a given field, there’s some middle range of familiarity where I’m surprised by all sorts of things; at the tail ends, I either don’t know enough to be surprised, or already know everything that might surprise me. In the middle, I have just enough of a reference class that it frequently gets broken – and I end up concluding that everything is weird.


(Answers: 2, 5, and 6 are fictional. Details on the sea tarp jellyfish, the reverse hydra worm, the squid worm, the sea sapphire, and the pistol shrimp.)