How many neurons are there?

Image from NOAA, in the public domain.

I just finished a large project trying to estimate that. I’ve posted it on its own page here. Here’s the abstract:

We estimate that there are about 10^24 neurons on earth, with about an order of magnitude of uncertainty. Most of these are from insects, with significant contributions from nematodes and fish. For insects, we multiplied the apparent number of insects on earth by the number of neurons in a small insect, the fruit fly. Most other classes of animal contribute 10^22 neurons at most, and so are unlikely to change the final analysis. For nematodes, we looked at studies that provide an average number of nematodes per square meter of soil or ocean floor, and multiplied that by the number of neurons in Caenorhabditis elegans, an average-sized nematode. Fish may also play a significant role. We neglected a few categories that probably aren’t significant, but could conceivably push the estimate up.

Using a similar but less precise process based on evolutionary history and biomass over time, we also estimate that there have been 10^33 neuron-years of work over the history of life, again with around an order of magnitude of uncertainty.
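For concreteness, here’s a minimal sketch of the two largest terms of that estimate in Python. The input counts are illustrative round numbers of the kind commonly cited, not the exact figures from the full writeup, so treat the output as an order-of-magnitude check only.

```python
# An illustrative order-of-magnitude sketch, not the writeup's exact figures.
# All input counts below are rough, commonly cited round numbers (assumptions).

insects_on_earth = 1e19        # assumed: classic order-of-magnitude insect count
neurons_per_fruit_fly = 1e5    # assumed: rough neuron count for a fruit fly

nematodes_on_earth = 1e21      # assumed: soil + seafloor densities scaled to area
neurons_per_c_elegans = 302    # C. elegans has 302 neurons

insect_neurons = insects_on_earth * neurons_per_fruit_fly      # ~1e24
nematode_neurons = nematodes_on_earth * neurons_per_c_elegans  # ~3e23

print(f"insects:   {insect_neurons:.0e}")
print(f"nematodes: {nematode_neurons:.0e}")
print(f"total:     {insect_neurons + nematode_neurons:.0e}")   # ~1e24, dominated by insects
```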


Male dairy calves, male chicks, and relative suffering from animal foods

Or: Do “byproduct” animals of food animal production significantly affect estimates of comparative suffering caused by those foods? No.

[Image adapted from this image by Flickr user Sadie_Girl, under a CC BY-SA 2.0 license.]

See, relatedly: What happens to cows in the US?

Short version

There’s a shared belief in animal-welfare-oriented effective altruism that eggs and chicken meat cause a great deal more suffering than beef or dairy (1). You can make big strides towards reducing the suffering caused by your diet by eating fewer eggs and less chicken, even if you don’t go fully vegetarian or vegan.

Julia Galef, Brian Tomasik, and William MacAskill have made different versions of this calculation, with different metrics, and have come to the same conclusion. These three calculations include only the animal used directly for production. (Details about the calculations and my modifications are in the long version below.) But the production of several kinds of animal product requires bringing into existence animals that aren’t used for that product – like male calves born to lactating dairy cows, or male chicks hatched when producing egg-laying hens. I wondered if including these animals would significantly change the amount of suffering in various animal foods.

It turns out that even accounting for these other animals indirectly created during production, the relative amounts of suffering across animal foods don’t change very much. If you buy the premises of these quantitative ethical comparisons, beef and dairy make so much product using so few animals that they’re still 1-3 orders of magnitude better than eggs or chicken. Or rather, the message of “eat less chicken” and “if you’re going to eat animal products, eat dairy and beef” still makes sense even if we account for the maximum number of other animals created incidental to the production of each food animal. I’m going to call these the “direct and incidental animals” (DIA) involved in a single animal’s worth of product.

The question is complicated by the fact that “incidental” animals still go into another part of the system. Day-old male chicks are used for pet and livestock food, and male dairy calves are raised for meat.

Given that these male calves are tied to dairy production, it seems unlikely that production of dairy and meat is what it would be if they weren’t connected. For instance, if there is less demand for dairy and thus fewer male dairy calves, it seems like one of the following should happen:

  1. No change to meat calf supply, less meat will be produced (DIA estimates seem correct)
  2. Proportionally more meat calves will be raised (original estimates seem correct)
  3. Something between the above (more likely)

Reframed: It depends on whether demand for dairy increases the meat supply and makes it less profitable to raise meat cows, or whether demand for meat makes it more profitable to raise dairy cows, or both. I’m not an economist and won’t go into which of these is the case. (I tried to figure this out and didn’t make much headway.) That said, it seems likely that the actual expected number of animal lives or days of suffering is somewhere between the initial numbers and my altered values for each source.

The most significant change from the original findings suggests that meat cows cause a fair bit more suffering, over a longer period of time, than the original calculations predict – but only if demand for meat is significantly propping up the dairy industry. Even if that’s true, the suffering caused by beef is still a little smaller than that caused by pork, and nowhere near as much as that caused by smaller animals.

Modifications to other estimates including direct and incidental animals (DIA)

(Tomasik’s columns are in equivalent days of suffering caused per kg demanded; Galef’s columns are in lives per 1000 calories. “Max” marks upper-bound figures.)

|      | Tomasik’s original estimate | DIA Tomasik estimate | Galef’s original estimate | DIA Galef estimate |
|------|-----------------------------|----------------------|---------------------------|--------------------|
| Milk | 0.12                        | 0.14                 | 0.000057 (max)            | 0.00013 (max)      |
| Beef | 1.9 (max)                   | 4.74 (max)           | 0.002469 (max)            | 0.0029 (max)       |
| Eggs | 110                         | 125                  | 0.048485                  | 0.048485           |

That’s basically it. For a little more info and how I came to these conclusions, read on.

Longer version

On the topic of effectively helping animals, one thing I’ve heard a few times is that eating dairy and beef isn’t terribly harmful, since they come from such large animals that a serving of beef or milk is a very small part of the output of the animal. On the other hand, chickens are very small – an egg is a full day’s output for a hen, and a family can eat an entire chicken in one dinner. Combine that with the fact that most chickens are raised in extremely unnatural and unpleasant conditions, and you have a framework for directly comparing the suffering that goes into different animal products.

This calculation has been made by three people I’m aware of – Brian Tomasik on his website, William MacAskill in his book Doing Good Better, and Julia Galef on her blog. The organization One Step for the Animals also recommends people stop eating chickens, on these grounds, but I didn’t find a similar breakdown on their website after a few minutes of looking. It’s still worth checking out, though. (Did you know that chicken consumption, in pounds per year, surpassed beef consumption only within the last 20 years, and is still climbing?)

Galef compares calories per life. She includes the male chicks killed for each egg-laying hen.

Tomasik looks at “days of suffering caused per kg demanded”.

MacAskill briefly examines three factors: the number of animal years and the number of animal lives that go into a year of eating the average omnivorous American diet, along with numerical “quality of life” estimates from Bailey Norwood. (He doesn’t combine these factors numerically so much as use them to establish a basis for recommending people avoid chicken. I didn’t do an in-depth analysis of his numbers, but it’s safe to say that, as with the others, adding in other animal lives doesn’t seem to change his conclusions significantly.)

With pigs and meat chickens, the case is straightforward – both sexes are raised for meat, and suppliers breed animals to sell them and retain enough to continue breeding. The aged animals are eventually slaughtered as meat as well.

But only female chickens lay eggs. Meat chickens and egg chickens raised at scale in the USA are two different breeds, so when a breeder produces laying hens, they wind up with more male chicks than are needed for breeding. Similarly, dairy cows have to give birth to a calf for every season they produce milk. The average dairy cow gives birth to 2.4 calves in her lifetime, and slightly fewer than 1.2 of those are male. The male chicks from egg-laying breeds and the male dairy calves are used for meat.

Aged dairy cows and egg-laying chickens are also sold as meat. “Spent hens” that are no longer commercially profitable, at 72 weeks old, are sold for ‘processed chicken meat’. (Other sources claim pet food or possibly just going to landfills. Pet food sounds reasonable, but landfills seem unlikely to me, at least for large operations.) There aren’t as many of these as either cows or chickens raised directly for meat, so they’re a comparatively small fraction, but they’re clearly still feeding into the meat system.

🐔

When talking about this, we quickly run into some economic questions, like “perhaps if the demand for dairy dropped, the meat industry would start raising more calves for meat instead?”

My intuition says it ought to shake out one way or the other – either decreasing demand for dairy cows results in the price of meat going up, or decreasing demand for meat results in demand for dairy cows going down.

In the egg case, male chicks aren’t literally put in a landfill; they’re ground up and sold for pet food. Without this otherwise unused source of protein, would pet food manufacturers increase demand for some other kind of meat? It seems possible both that this would happen and that the price of pet food would increase. Then maybe less pet food would be bought to make up the difference, at least in the long term – demand for cheap pet food must be somewhat inelastic, at least in the short term?

My supply and demand curves suggest that both demand should decline and price should increase. That said, we’re leaving the sphere of my knowledge and I don’t know how to advise you here. For the moment, I’m comfortable folding in both animals produced in the supply chain for a product, and animals directly killed or used for a product. But based on the economic factors above, these still don’t equate to “how many animal lives / days are expected to be reduced in the long term by avoiding consumption of a given product.”

At most, though, each dairy cow brings an extra 1.2 meat cows into existence, each meat cow brings an extra 0.167 dairy cow, and each egg-laying hen brings an extra male chick that is killed around its first day. These are the “direct and incidental animals” created for each animal directly used during production.

 

Some notes on the estimates below:

I ignored things like fish and krill meal that go into production. Tomasik notes that 37% of the global fish harvest (by mass) is ground and used for animal feed for farmed fish, chickens, and pigs. But this seems to be mostly from wild forage fish, not farmed fish, and wild populations are governed by a different kind of population optimum – niches. We’d guess that each fish removed from the environment frees up resources that will be eaten by, on average, one new fish. (Of course, populations we’re fishing seem to be declining, so something is happening, but it’s certainly not one-to-one.)

I also only looked at egg-laying chickens, meat cows, and dairy cows. This is because pork and other industries aren’t sex-segregated – all babies born are raised for the same thing. A few will be kept aside and used to produce more babies, but even the breeding animals will eventually be turned into meat. The number of days these animals live probably affects Tomasik’s calculations somewhat, but the breeding animals are still a minority.

I also didn’t include a detailed analysis of veal, because if you’re concerned about animal welfare, you probably already don’t eat it. (I’m going to assert that if you want to eat ethically treated food, avoid a meat whose distinguishing preparation characteristic is “force-feed a baby”.) Veal is a byproduct of the dairy industry, but only a minority of the calves become veal. Foie gras does have a multiplier effect, because female geese don’t fatten up as much and are killed early, so for each goose turned into foie gras, another goose is killed young.

Old dairy cows and laying hens are used for meat, but it’s a minority of the meat production. I didn’t factor this in. See What happens to cows in the US for more on cows.

Modifications to other estimates including direct and incidental animals (DIA)

(Tomasik’s columns are in equivalent days of suffering caused per kg demanded; Galef’s columns are in lives per 1000 calories. “Max” marks upper-bound figures.)

|      | Tomasik’s original estimate | DIA Tomasik estimate | Galef’s original estimate | DIA Galef estimate |
|------|-----------------------------|----------------------|---------------------------|--------------------|
| Milk | 0.12                        | 0.14                 | 0.000057 (max)            | 0.00013 (max)      |
| Beef | 1.9 (max)                   | 4.74 (max)           | 0.002469 (max)            | 0.0029 (max)       |
| Eggs | 110                         | 125                  | 0.048485                  | 0.048485           |

DIA modifications to Tomasik’s estimate

(Days of equivalent suffering / kg)

To adjust this estimate, I added the extra “equivalent days of suffering caused per kg demanded” for the other animals:

Egg-laying chickens
[(4 suffering per day of life for egg-laying chickens * 501 days of life) + 1 * (3 suffering per day of life for meat chickens * 1 day of life)] / 16 kg of edible product over the life of an egg-laying chicken = 125 max equivalent days of suffering caused per kg demanded (vs 110)

Dairy cows
[(2 suffering per day of life in milk cows * 1825 days of life) + 1.2 * (1 suffering per day of life in meat cows * 395 days of life)] / 30000 kg of edible product over the life of a dairy cow = 0.14 max equivalent days of suffering caused per kg demanded (vs 0.12)

Meat cows
[(1 suffering per day of life in meat cows * 395 days of life) + 0.167 * (2 suffering per day of life in dairy cows * 1825 days of life)] / 212 kg of edible product over the life of a meat cow = 4.74 max equivalent days of suffering caused per kg demanded (vs 1.9)

The meat cow number is the only very different one here.
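If you want to reproduce these numbers yourself, here’s a minimal sketch of the adjustment in Python. It uses only the suffering weights, lifespans, incidental-animal multipliers, and edible-product figures quoted above; the function itself is just illustrative.

```python
# A sketch of the DIA adjustment to Tomasik-style estimates. The weights,
# lifespans, multipliers, and kg figures are the ones quoted above; the
# function name and structure are mine, for illustration.

def dia_days_per_kg(direct, incidental, extra_animals, kg_product):
    """direct / incidental are (suffering per day of life, days of life) pairs;
    extra_animals is the number of incidental animals per direct animal;
    kg_product is edible product over the direct animal's life."""
    direct_days = direct[0] * direct[1]
    incidental_days = extra_animals * incidental[0] * incidental[1]
    return (direct_days + incidental_days) / kg_product

# Egg-laying hen, plus 1 male chick killed around day one
print(round(dia_days_per_kg((4, 501), (3, 1), 1, 16), 1))          # ~125.4

# Dairy cow, plus 1.2 meat cattle
print(round(dia_days_per_kg((2, 1825), (1, 395), 1.2, 30000), 3))  # ~0.137

# Meat cow, plus 0.167 dairy cows
print(round(dia_days_per_kg((1, 395), (2, 1825), 0.167, 212), 2))  # ~4.74
```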

DIA modifications to Galef’s estimate

I adjusted this by adding other lives to Galef’s estimate of lives per 1000 calories:

Egg-laying chicken
Galef included this in her calculation of 0.048485 lives per 1000 calories of eggs.

Dairy cows
[0.000057 lives per 1000 calories of milk] * 2.2 = 0.00013 max lives per 1000 calories of milk
[0.000075 lives per 1000 calories of cheese] * 2.2 = 0.00017 max lives per 1000 calories of cheese

Meat cows
[0.002469 lives per 1000 calories of beef] * 1.167 = 0.0029 max lives per 1000 calories of beef
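The adjustment here is just a multiplication by (1 + incidental animals per direct animal) – 2.2 for dairy products, 1.167 for beef. A quick sketch with the numbers above:

```python
# Sketch of the adjustment to Galef-style figures: scale lives per 1000 calories
# by (1 + incidental animals per direct animal). Values are the ones quoted above.

def dia_lives_per_1000_cal(original_lives, incidental_per_direct):
    return original_lives * (1 + incidental_per_direct)

print(dia_lives_per_1000_cal(0.000057, 1.2))    # milk   -> ~0.00013
print(dia_lives_per_1000_cal(0.000075, 1.2))    # cheese -> ~0.00017
print(dia_lives_per_1000_cal(0.002469, 0.167))  # beef   -> ~0.0029
```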

Other economic notes

I’m hoping someone who knows more here will be able to make use of the information I found.

The number of meat cows in the US has been broadly decreasing since 1970. The number of dairy cows has also been decreasing since at least 1965, but dairy consumption is increasing, because those cows are giving far more milk.

When dairy prices drop, dairy farmers are known to kill some of their herds and sell them for meat, leading to a drop in meat prices.

We would also expect dairies and beef farms to compete with each other for some of the same resources, like land and feed.

A friend wondered whether dairy steers are much smaller than beef cows, such that shifting the same volume of meat production to these steers would mean more animal lives. It turns out that dairy steers and beef cows are about the same weight at slaughter.


(1) With fish perhaps representing much more suffering than eggs or chickens, and other large meat sources like pigs falling somewhere in the middle.


 

This blog has a Patreon. If you like what you’ve read, consider giving it a look.

An invincible winter

[Picture in public domain, taken by Jon Sullivan]

Early September brought Seattle what were to be some of the hottest days of the summer. For weeks, people had been turning on fans, escaping to cooler places to spend the day, buying out air conditioners, which most of the city didn’t own. I cowered in my room with an AC unit on loan from a friend lodged in the window, only going out walking when the sun had set.

That week, Eastern Washington was burning. It does that every summer. But this year, a lot of Eastern Washington was burning. Say it with me – 2017 was one of the worst fire years on record. That week, the ash from the fires drifted over Seattle. You smelled smoke everywhere in the city. The sky was gray. At sunrise and sunset, the sun was blood-red. One day, gray ash drifted down from the sky, the exact size of snowflakes. It dusted the cars and kept falling through the afternoon.

That day, people said the weather was downright apocalyptic. They weren’t entirely wrong.

Many people aren’t clear on what exactly a nuclear winter is. The mechanism is straightforward. When cities burn in the extreme heat of a nuclear blast – and we do mean cities, plural; most nuclear exchange scenarios involve multiple strikes for strategic reasons – they turn into soot, and the soot floats up. If enough ash from burned cities reaches the stratosphere, the upper layer of the atmosphere, it stays there for a long time. The ash clouds blot out the sun, cool the earth, and choke off the growth of crops. Within weeks, agriculture grinds to a halt.

There’s a lot of uncertainty over nuclear winter. But by one estimate, the detonation of less than 1% of the world’s nuclear arsenal – a fairly small war – could drop temperatures by five degrees Celsius, which would climb back only slowly over twenty years. The ozone layer would thin. Less rain would fall. Two billion people would starve.

On Tuesday and Wednesday that week, the temperature was predicted to reach over 100 degrees. It didn’t. The particulates in the air blocked enough of the sun’s heat that it barely hit the 90’s. Pedestrians didn’t quite breathe easier, but did sweat less. Our own tiny, toy model taste of a nuclear apocalypse.


I’d been feeling strange for the last few weeks, unrelatedly, and sitting at my desk for hours, my mind did a lot of wandering. I hoped things would be looking up – I’d just gotten back from an exciting conference with good friends, and also from seeing the solar eclipse.

I’d made the pilgrimage with friends. We drove for hours, east across the mountains the week before they burned. We crossed the Columbia River into Oregon, and finally drove up a winding dirt road to a wide clearing with a small pond. I studied for the GRE in the shadows of dry pines. We played tug-of-war with the crayfish and watched the mayflies dance above the pond. The morning of, the sun climbed in the sky, and I had never appreciated how invisible the new moon is, or how much light the sun puts out – even when it was half-gone, we still had to peer through black plastic glasses to confirm that something had changed. But soon, it became impossible not to notice.

I kept thinking about what state I would have been thrown into if I hadn’t known the mechanism of an eclipse – how deep the pits of my spiritual terror could go. Whether it would be limited by biology or belief. As it is, it was only sort of spiritually terrifying, in a good way. The part of my brain that knew what was happening had spread that knowledge to all the other parts as well, so I could run around in excitement and really appreciate the chill in the air, the eerie distortion of shadows into slivers, and finally, the moon sealing off the sun.

The solar corona.

The sunset-colored horizon all around the rim of the sky.

Stars at midday.

We left after the daylight returned, but while the moon was still easing away, eager to beat the crowds back to the city. I thought about the mayflies in the pond, and their brief lives – the only adults in hundreds of generations to see the sun, see the stars, and then see the sun again.

I thought something might shake loose in my brain. Things should have been looking up, but the adventures had scarcely touched the inertia. Oh, right, I had also been thinking a lot about the end of the world.

I wonder about the mental health of people who work in existential risk. I think it must vary. I know people who are terrified on a very emotional and immediate level, and I know people who clearly believe it’s bad but don’t get anxious over it, and aren’t inclined to start. I can’t blame them. I used to be more of the former, and now I’m not sure if it’s eased up or if I’m just not thinking about things that viscerally scare me anymore. I’m not sure the existential terror can tip you towards something you weren’t predisposed to. In my case, I don’t think the mental fog was from it. But the backdrop of existential horror certainly lent it an interesting flavor.


It’s late October now. I’ve pulled out the flannel and the space heater and the second blanket for the bed. When I went jogging, my hands got numb. I don’t mind – I like autumn, I like the descent into winter, heralded by rain and red leaves and darkness, and the trappings of horror and then of the harvest. Peaches in the summertime, apples in the fall. The seasons have a comforting rhyme to them.

That strange inertia hasn’t quite lifted, but I’m working on it. Meanwhile, the world continues to cant sideways. When we arranged the Stanislav Petrov day party in Seattle this year, to celebrate the day a single man decided not to start World War 3, I wondered if we should ease up on the “nuclear war is a terrifying prospect” theme we had laid on last year. I thought that had probably been on people’s minds already.

So geopolitical tensions are rising, and have been rising. The hemisphere gets colder. Not quite out of nostalgia, my mind keeps flickering back to last month, to not-quite-a-hundred-degrees Seattle, to the red sun.

There’s a beautiful quote from Albert Camus: “In the midst of winter, I found there was, within me, an invincible summer.” That Tuesday, like the momentary pass of the moon over the sun in mid-day, in the height of summer, I saw the shadow of a nuclear winter.



For a more detailed exploration of the mechanics of nuclear winter and why we need more research, look at this piece from Global Risk Research Network.

What do you do if a nuclear blast is going to go off near you? Read this piece. Maybe read it beforehand.

What do you do if you don’t want a nuclear blast to go off near you? The Ploughshares Fund is one of the canonical organizations funding work on reducing risks from nuclear weapons. You might also be interested in Physicians for Social Responsibility and the Nuclear Threat Initiative.

This blog has a Patreon. If you like what you’ve read, consider giving it a look.

What happens to cows in the US?

(Larger version. Image released under a CC-BY SA 4.0 license.)

There are 92,000,000 cattle in the USA. Where do they come from, what are they used for, and what are their ultimate fates?

I started this as part of another project, and was mostly interested in what happens to the calves of dairy cows. As I worked, though, I was astonished that I couldn’t easily find this information laid out elsewhere, and decided to publish it on its own to start.


Note: Numbers are not exactly precise, and come from a combination of raw data from 2014-2016 and guesswork. Also, the relative sizes on the graph (of arrows and boxes) are not accurate – they’re sized by hand, based on eyeballing the numbers and the available settings in yEd. I’m a microbiologist, not a graphic designer, what do you expect? If that upsets you, try this version, which is also under a CC-BY SA 4.0 license. If you want to make a prettier or more accurate version, knock yourself out.

There are some changes from year to year, which might account for small (<5%) discrepancies. I also tried to generalize from practices used on larger farms (e.g. operations with more than 1,000 cows), which make up a minority of farms but house a majority of the cattle.

In the write-up, I try to clearly label “male cattle” and “female cattle” or “female cows” when relevant, because this confused me to no end when I was gathering data.


Let’s start with dairy cows. There are 9,267,000 female cows actively giving milk this season (“milk cows”) in the USA. For a cow to give milk, it has to be pregnant and give birth. That means that 9,267,000 calves are born to milk cows every year.

About half of these are female. Most milk cows are impregnated at around 2 years old with artificial insemination. There’s a huge market in bull sperm, and 5% of the sperm sold in the US is sex-selected, meaning that 90% of the sperm in a given application is one sex. Dairies are mostly interested in having more female cows, so it seems like 2.25% of the milk cow calves that would have been male (by chance) are instead female (because of this technology).

The female calves almost all go back into being milk cows. The average dairy cow has 2.4 lactation periods before she’s culled, so she breeds at a little over her replacement rate. I’m actually still not 100% certain where that 0.2-nd female calf goes, but dairies might sell extra females to be beef cattle along with the males.

The 2,755,000 milk cows that are culled each year are generally turned into lean hamburger. They’re culled because of infection or health problems, or age and declining milk volume. They’re on average around 4 years old. (Cows can live to 10 years old.)

Male calves are, contrary to some claims, almost never just left to die. The veal industry exists, in which calves are kept in conditions ranging from “not that different from your average cow’s environment” to “absolutely terrible”, and are killed young for their meat. It seems like between 450,000 and 1,000,000 calves are killed for veal each year, although that industry is shrinking. I used the 450,000 number.

Some of the male calves are kept and raised, and their sperm is used to impregnate dairy cows. This article describes an artificial insemination company, which owns “1,700 dairy and beef bulls, which produce 15 million breeding units of semen each year.” That’s about 1 in 1,000, a minuscule fraction of the male calves.

The rest of those male calves, the dairy steers, are sold as beef cattle. After veal calves, we have 3,952,000 remaining male calves to account for. They make up 14% of the beef supply of the 30,578,000 cattle slaughtered annually. From those numbers, we’d guess that 4,060,000 dairy steers are killed yearly – and that’s close enough to the above estimate that I think we’ve found all of our male calves. That’s only a fraction of the beef supply, though – we’ll now turn our attention to the beef industry.

We imported 1,980,000 cattle from Canada and Mexico in 2015, mostly for beef. We also export a few, but it’s under 100,000, so I left it off the chart.

Most beef cows are bred on calf-cow operations, which either sell the calves to feedlots or raise them for meat directly. To replace their stock, they either keep some calves to breed more cows, or buy them from seedstock operations (which sell purebred or other specialty cattle). Based on the fact that 30,578,000 cattle are slaughtered annually (and we’ve already accounted for some of them above), and that cattle are being bred at the replacement rate, it seems like each year, calf-cow operations generate 21,783,000 new calves. There’s a lot of variation in how beef cattle are raised, which I’m glossing over in this simplified graph. In general, though, they seem to be killed at between 1.5 and 3 years old.

Of course, calf-cow operations also need breeding cattle to keep the operation running, so while some of those cows are raised only for meat, some are also returned to the breeding pool. (Seedstock operations must be fairly small – under 3% of cattle in the US are purebred – so I think calf-cow operations are the majority worth examining.) Once they’re no longer productive breeders, breeding animals are also culled for beef.

This article suggests that 14-15% of cows are culled annually, I think on calf-cow operations that raise cows for slaughter themselves (although possibly only on smaller farms). If that’s the case, then each year, they must create about 14.5% more calves than are raised only for meat. This suggests that 21,783,000 of the cattle born to calf-cow operations are raised for meat, with the remaining 2,759,000 calves going back into breeding each year. These will mostly be females – there seems to be a 1:15-25 ratio of males to females on calf-cow operations – so disproportionately more males will go directly to beef.

By adding up the bottom numbers, we get ~30,600,000 cattle slaughtered per year. In terms of doing math, this is fortunate, because we also used that number to derive some of the fractions therein. We can also add up the top numbers to get 33,030,000 born, which is confusing. If we take out the 450,000 veal calves and the 1,980,000 imported calves, it drops back to the expected value, which I think means I added something together incorrectly. While I’m going to claim this chart and these figures are mostly right, please do let me know if you see holes in the math. I’m sure they’re there.
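If you’d like to check that bookkeeping yourself, here’s a quick tally in Python using only the figures quoted above; it just reproduces the addition and subtraction described in the paragraph.

```python
# A quick re-tally of the figures quoted in this writeup. All numbers are taken
# directly from the text above; this only reproduces the check described there.

dairy_calves  = 9_267_000    # calves born to milk cows each year
calf_cow_born = 21_783_000   # calves generated by calf-cow operations
imported      = 1_980_000    # cattle imported from Canada and Mexico
veal          = 450_000      # calves killed for veal
slaughtered   = 30_578_000   # cattle slaughtered annually (reported)

entering_system = dairy_calves + calf_cow_born + imported
print(entering_system)                     # 33,030,000 entering the system (including imports)
print(entering_system - veal - imported)   # 30,600,000 -- close to the slaughter figure
print(slaughtered)                         # 30,578,000
```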


“Wow, Georgia, I’m surprised, I really thought this was going to veer off into the ethics of the dairy industry or something.”

Ha ha. Wait for Part 2.

This blog has a Patreon! If you like what you’ve read, consider checking it out.

Diversity and team performance: What the research says

(Photo of group of people doing a hard thing from Wikimedia user Rizimid, CC BY-SA 3.0.)

This is an extended version (more info, more sources) of the talk I gave at EA Global San Francisco 2017. The other talk I gave, on extinction events, is here. Some more EA-focused pieces on diversity, which I’ve read but which were assembled by the indomitable Julia Wise, are:

Effective altruism means effective inclusion

Making EA groups more welcoming

EA Diversity: Unpacking Pandora’s Box

Keeping the EA Movement welcoming

How can we integrate diversity, equity, and inclusion into the animal welfare movement?

Pitfalls in diversity outreach


There are moral, social, etc. reasons to care about diversity, all of which are valuable. I’m only going to look at one aspect, which is performance outcomes. The information I’m drawing from here comes primarily from meta-studies and experiments in a business context.

Diversity here mostly means demographic diversity (culture, age, gender, race) as well as informational diversity – educational background, for instance. As you might imagine, each of these has different impacts on team performance, but if we treat them as facets of the same thing (“diversity”), some interesting things fall out.

(Types of diversity which, as far as I’m aware, these studies largely didn’t cover: class/wealth, sexual orientation, non-cis genders, disability, most personality traits, communication style, etc.)

Studies don’t show that diversity has an overall clear effect, positive or negative, on the performance of teams or groups of people. (1) (2) The same may also be true on an organizational level. (3)

If we look at this further, we can decompose it into two effects (one where diversity has a neutral or negative impact on performance, and one where it has a mostly positive impact): (4) (3)

Social categorization

This is the human tendency to have an ingroup / outgroup mindset. People like their ingroup more. It’s an “us and them” mentality and it’s often totally unconscious. When diversity interacts with this, the effects are often – though not always – negative.

Diverse teams tend to have:

  • Lower feelings of group cohesion / identification with group
  • Worse communication (3)
  • More conflict, of productive but also non-productive varieties, as well as the perception of more conflict (5)
  • Biases

A silver lining: One of these ingrouping biases is the expectation that people more similar to us will also think more like us. Diversity clues us into diversity of opinions. (6) This gets us into:

Information processing 

— 11/9/17 – I’m much less certain about my conclusions in this section after further reading. Diversity’s effects on creativity/innovation and problem-solving/decision-making have seen mixed results in the literature. See the comments section for more details. I now think the counterbalancing positive force of diversity might mostly be as a proxy for intellectual diversity. Also, I misread a study that was linked here the first time and have removed it. The study is linked in the comments. My bad! —

Creative, intellectual work. (7) Diversity’s effects here are generally positive. Diverse teams are better at:

  • Creativity (2)
  • Innovation (9)
  • Problem solving. Gender diversity possibly correlates with group performance more strongly than the individual intelligence of group members does. (Note: A similarly-sized replication failed to find the same results. Taymon Beal kindly brought this to my attention after the talk.) (10)

Diverse teams are more likely to discuss alternate ideas, look at data, and question their own beliefs.


This loosely maps onto the “explore / exploit” or “divergent / convergent” processes for projects. (2)

    1. Information processing effects benefit divergent / explore processes.
    2. Social categorization harms convergent / exploit processes.

If your group is just trying to get a job done and doesn’t have to think much about it, that’s when group cohesiveness and communication are most important, and diversity is less likely to help and may even harm performance. If your group has to solve problems, innovate, or analyze data, diversity will give you an edge.


How do we get less of the bad thing? Teams work together better when you can take away harmful effects from social categorization. Some things that help:

    1. The more balanced a team is along some axis of diversity, the less likely you are to see negative effects on performance. (12) (7) Having one woman on your ten-person research team might not do much to help and might trigger social categorization. If you have five women, you’re more likely to see benefits.
    2. Remote work. Online teams appear to be less prone to gender bias.
    3. Time. Obvious diversity becomes less salient to a group’s work over time, and diverse teams end up outperforming non-diverse teams. (13) (6) Recognition of less-obvious cognitive differences (e.g. personality and educational diversity) increases over time. As we might hope, the longer a group works together, the less surface-level differences matter.

This article has some ideas on minimizing problems from language fluency, and also for making globally dispersed teams work together better.


How do we get more of the good thing? Diversity is a resource – more information and cognitive tendencies. Having diversity is a first step. How do we get more out of it?

    1. At least for age and educational diversity, a high need for cognition among team members helps. This is the drive of individual members to find information and think about things. (It’s not the same as, or especially correlated with, either IQ or openness to experience (1).)

Harvard Business Review suggests that diversity triggers people to stop and explain their thinking more. We’re biased towards liking and not analyzing things we feel more comfortable with – the “fluency heuristic.” (14) This is uncomfortable work, but if people enjoy doing it, they’re more likely to do it, and get more out of diversity.

But need for cognition is also linked with doing less social categorization at all, so maybe diverse groups with high levels of this just get along better or are more pleasant for all parties. Either way, a group of people who really enjoy analyzing and solving problems are likely to get more out of diversity.

2) A positive diversity mindset. This means that team members have an accurate understanding of potential positive effects from diversity in the context of their work. (4) If you’re working in a charity, you might think that the group you might assign to brainstorming new ways to reach donors might benefit from diversity more than the group assigned to fix your website. That’s probably true. But that’s especially true if they understand how diversity will help them in particular. You could perhaps have your team brainstorm ideas, or look up how diversity affects your particular task. (I was able to find results quickly for diversity in fundraising, diversity in research, diversity in volunteer outreach… so there are resources out there.)


Again, note that diversity’s effect size isn’t huge. It’s smaller than the effect size of support for innovation, external and internal communication, vision, task orientation, and cohesion – all these things you might correctly expect correlate with performance more than diversity (8). That said, I think a lot of people [at EA Global] want to do these creative, innovative, problem-solving things – convince other people to change lives, change the world, stop robots from destroying the earth. All of these are really important and really hard, and we need any advantage we can get.


  1. Work Group Diversity
  2. Understanding the effects of cultural diversity in teams: A meta-analysis of research on multicultural work groups
  3. The effects of diversity on business performance: Report of the diversity research network
  4. Diversity mindsets and the performance of diverse teams
  5. The biases that punish racially diverse teams
  6. Time, Teams, and Task Performance
  7. Role of gender in team collaboration and performance
  8. Team-level predictors of innovation at work: A comprehensive meta-analysis spanning three decades of research
  9. Why diverse teams are smarter
  10. Evidence of a collective intelligence factor in the performance of human groups
  11. When and how diversity benefits teams: The importance of team members’ need for cognition
  12. Diverse backgrounds and personalities can strengthen groups
  13. The influence of ethnic diversity on leadership, group process, and performance: an examination of learning teams
  14. Diverse teams feel less comfortable – and that’s why they perform better

Evolutionary Innovation as a Global Catastrophic Risk

(This is an extended version of the talk I gave at EA Global San Francisco 2017. Long-time readers will recognize it as an updated version of a post I wrote last year. It was wonderful meeting people there!)


This is a graph of extinction events over the history of animal life.

There are five canonical major extinction events that have occurred since the evolution of multicellular life. Biotic replacement has been hypothesized as the major mechanism for two of them: the Late Devonian extinction and the Permian-Triassic extinction. There are also three other major events outside the canonical five – the Great Oxygenation Event, the End-Ediacaran extinction, and the Anthropocene / Quaternary extinction.

Let’s look at four of them. The first actually occurs right before this graph starts.

I decided not to discuss the Great Oxygenation Event in the talk itself, but it’s also an example – photosynthetic cyanobacteria evolved and started pumping oxygen into the atmosphere, which, after filling up the oxygen sinks in rocks, flooded into the air and poisoned many of the anaerobes, leading to the “oxygen die-off” and the “rusting of the earth.” I excluded it because A) it wasn’t about multicellular life, which, let’s face it, is much more relevant and interesting, and B) I believe it happened over such a long span of time as to be not worth considering on the same scale as the others.

(I was going to jokingly call these “animal x-risks”, but figured that might confuse people about what the point of the talk was.)

The End-Ediacaran extinction

“Dickinsonia costata” by Verisimilius is licensed under CC BY-SA 3.0

We don’t know much about Precambrian life, but the Ediacaran seems to have been a peaceful time.

The Ediacaran sea floor was covered in a mat of algae and bacteria, and ‘critters’ – some were definitely animals, others we’re not sure about – ate or lived on the mats. There were tunneling worms, limpets, some polyps, and the sand-filled curiosities termed “vendozoans”. These may have been single enormous cells like today’s xenophyophores, with the sand giving them structural support. The fiercest animal is described as a “soft limpet” that eats microbes. They don’t seem to have had predators, and this period is sometimes known as the “Garden of Ediacara”. (1)

Then, 542 million years ago, something happens – the Cambrian explosion. In a geologically brief window of about 5 million years, a huge variety of animals evolves.

Molluscs, trilobites and other arthropods, a creative variety of worms eventually including the delightful Hallucigenia, and sponges exploded into the Cambrian. They’re faster and smarter than anything that’s ever existed. The peaceful Ediacaran critters are either outcompeted or gobbled up, and vanish from the fossil record. The first shelled animals indicate that predation had arrived, and that the gates of the Garden of Ediacara had closed forever.

The Late Devonian extinction

Jump forward about 170 million years – 50% of genera go extinct. Marine species suffered the most in this event, probably due to anoxia.

There’s an unexpected possible culprit – plants around this time made a few evolutionary leaps that produced the first forests. Suddenly, a lot of trees pulling carbon dioxide out of the air led to global cooling, and large amounts of new soil led to nutrient-rich runoff, which led to widespread marine anoxia that decimated the oceans.

devonian
Gingko trees, some of the oldest tree lineages alive. Image by Jean-Pol Grandmont, under a CC BY-SA 3.0 license.

We do know that there were a series of extinction events, so forests were probably only a partial cause. The longer climate trend around the extinction was global warming, so the yo-yoing temperature (from general warming and cooling from plants) likely contributed to extinction. (2) It’s strange to think that the land before 375 million years ago didn’t have much in the way of soil – major root structures contributed to rock wearing away. Plus, once you have some soil, and once the first trees die and contribute their nutrients, you get more soil and more plants – a positive feedback loop.

The specific trifecta of evolutions that let forests take over land: significant root structures, complex vascular systems, and seeds. Plants prior to this were small, lichen-like, and had to reproduce in water. (3)

The Permian-Triassic extinction

96% of marine species go extinct. Most of this happens in a 20,000 year window, which is nothing in geologic time. This is the largest and most sudden prehistoric extinction known.

The cause of this one was confusing for a long time. We know the earth got warmer, or maybe cooler, and that volcanoes were going off, but the timing didn’t quite match up.

Volcanoes were going off for much longer than the extinction, and it looks like die-offs were happening faster than we’d expect from increasing volcanism, or standard climate change cycles. (4) One theory points out that die-offs line up with exponential or super-exponential growth, as in, from a replicating microbe. Remember high school biology?

That theory implicates Methanosarcina, an archaeon that evolved the chemical process for turning organic carbon into methane at around the same time. Remember those volcanoes? They were spewing enormous amounts of nickel – an important co-factor for that process.

Methanosarcina, image from Nature

(Methanosarcina appears to have gotten the gene from a cellulose-digesting bacterium – definitely a neat trick. (5))

The theory goes that Methanosarcina picked up its new pathway, and flooded the atmosphere with methane, which raised the surface temperature of the oceans to 45 degrees Celsius and killed most life. (2)

This report is fairly recent, and it’s certainly novel, so I don’t want to claim that it’s definitely confirmed, or confirmed on the same level that, say, the Chicxulub impact theory is. That said, at the time of this writing, the cause of the Permian-Triassic extinction is unclear, and the methanogen theory doesn’t seem to have been majorly criticized or debunked.

Quaternary and Anthropocene extinctions

Finally, I’m going to combine the Quaternary and Anthropocene events. They don’t show up on this chart because the data’s still coming in, but you know the story – maybe you’re an ice-age megafauna, or rainforest amphibian, and you are having a perfectly fine time, until these pretentious monkeys just walk out of the Rift Valley, and turn you into a steak or a corn farm.

Art by Heinrich Harder.

Because of humans, since 1900, extinctions have been happening at about a thousand times the background rate.

(Looking at the original chart, you might notice that the “background” number of extinctions appears to be declining over time – what’s with that? Probably nothing cosmic – more recent species are just more likely to survive to the present day.)

Impacts from evolutionary innovation

You can probably see a common thread by now. These extinctions were caused – at least in part – by natural selection stumbling upon an unusually successful strategy. Changing external conditions, like nickel from volcanoes or other climate change, might contribute by giving an edge to a new adaptation.

    1. In some cases, something evolved that directly outcompeted the others – biotic replacement.
    2. In others, something evolved that changed the atmosphere.
    3. I’m going to throw in one more – that any time a species goes extinct due to a new disease, that’s also an evolutionary innovation. Now, as far as we can tell, this is extremely rare in nature, but possible. (7)

Are humans at risk from this?

From natural risk? It seems unlikely. These events are rare and can take on the order of thousands of years or more to unfold, at which point we’d likely be able to do something about it.

That is, as far as we know – the fossil record is spotty. As far as I can tell, we were able to pin the worst of the Permian-Triassic extinction down to 20,000 years only because that’s how narrow the resolution on the fossil band formed at the time was. It might have actually been quicker.

Even determining if an extinction has happened or not, or if the rock just happened to become less good at holding fossils, is a struggle. I liked this paper not really for the details of extinction events (I don’t think the “mass extinctions are periodic” idea is used these days), but for the nitty gritty details of how to pull detailed data out of rocks.

That said, for calibrating your understanding, it seems possible that extinctions from evolutionary innovation are more common than mass extinctions involving asteroids (only one mass extinction has been solidly attributed to an asteroid: the Chicxulub impact that ended the reign of dinosaurs.) That’s not to say large asteroid impacts (bolides) don’t cause smaller extinctions – but one source estimated the bolide:extinction ratio to be 175:1. (2)

Plus, having a brain matters, and I think I can say it’s really unlikely that a better predator (or a new kind of plant) is going to evolve without us noticing. There are some parallels here with, say, artificial intelligence risk, but I think the connection is tenuous enough that it might not be useful.

If we learn that such an event is happening, it’s not clear what we’d do – it depends on specifics.

Synthetic biology

But consider synthetic biology – the thing where we design new organisms and see what happens. As capabilities expand, should we worry about lab escapes on an existential scale? I mean, it has happened in nature.

Evolution has spent billions of years trying to design better and better replicators. And yet, evolutionary innovation catastrophes are still pretty rare.

That said, people have a couple of advantages:

        1. We can do things on purpose. (I mean, a human working on this might not be trying to make a catastrophic geoweapon – but they might still be trying to make a really good replicator.)
        2. We can come up with entirely new things. When natural selection innovates, every incremental step on the way to the final result has to be an improvement on what came before. It’s like if you tried to build a footbridge, but at every single step of building it, it had to support more weight than before. We don’t have those constraints – we can just design a bridge, then build it, then have people walk across it. We can design biological systems that nobody has seen before.

This question – whether we can design organisms more effective than evolution can – is still open, and it’s crucial for telling us how concerned we should be about synthetic organisms in the environment.

People are concerned about synthetic biology and the risk of organisms “escaping” from a lab, industrial setting, or medical setting into the environment, and perhaps persisting or causing local damage. They just don’t seem to be worried on an existential level. I’m not sure if they should be, but it seems like the possibility is worth considering.

For instance, a company once almost released large quantities of an engineered bacterium that turned out to produce ethanol in soil in large enough quantities to kill all the plants in a lab microcosm. It appears that we don’t have reason to think it would have outcompeted other soil biota and actually caused an existential or even a local catastrophe, but it was caught at the last minute and the implications are clearly troubling. (9)


  1. Ediacaran biota: The dawn of animal life in the shadow of giant protists
  2. On the causes of mass extinctions
  3. Terrestrial-Marine Teleconnections in the Devonian: Links between the Evolution of Land Plants, Weathering Processes, and Marine Anoxic Events
  4. The Permo-Triassic extinction
  5. Methanogenic burst in the End-Permian carbon cycle
  6. Natural Die-offs of Large Mammals: Implications for Conservation I’m pretty sure I’ve seen at least a couple other sources mention this, but can’t find them right now. I had Chytridiomycosis in mind as well. This seems like an important research project and obviously has some implications for, say, biology existential risk.
  7. Rather sensationalized description from Cracked.Com

Talking at EA Global

I’m speaking at both lightning talk sessions (Saturday and Sunday afternoon) at the Effective Altruism Global conference SF this weekend. Catch me talking about evolutionary innovation and extinction on Saturday (5:15), and diversity in teams on Sunday (4:00).

On the off chance that you’ll be at the conference but haven’t already met me, or perhaps know me and want to chat more, feel free to comment on this post or send me an email at eukaryotewritesblog (at) gmail.com to arrange meeting up and saying hello.

I was at a party the night before and got into at least six different conversations about the existential risk / biology overlap, so I’m expecting this weekend to be a really good time. See you there!

(If you can’t make it, I’ll post the talks and longer versions of what I talked about here afterwards.)

Fictional body language

Here’s something weird.

A common piece of advice for fiction writers is to “show, not tell” a character’s emotions. It’s not bad advice. It means that when you want to convey an emotional impression, describe the physical characteristics instead.

The usual result of applying this advice is that instead of a page of “Alice said nervously” or “Bob was confused”, you get a vivid page of action: “Alice stuttered, rubbing at her temples with a shaking hand,” or “Bob blinked and arched his eyebrows.”

The second thing is certainly better than the first thing. But a strange thing happens when the emotional valence isn’t easily replaced with an easily-described bit of body language. Characters in these books whose authors follow this advice seem to be doing a lot more yawning, trembling, sighing, emotional swallowing, groaning, and nodding than I or anyone I talk to does in real life.

It gets even stranger. These characters bat their lashes, or grip things so tightly their knuckles go white, or grit their teeth, or their mouths go dry. I either don’t think I do those things, or wouldn’t notice someone else doing them.

Blushing is a very good example, for me. Because I read books, I knew enough that I could describe a character blushing in my own writing, and the circumstances in which it would happen, and what it looked like. I don’t think I’d actually noticed anyone blush in real life. A couple months after this first occurred to me, a friend happened to point out that another friend was blushing, and I was like, oh, alright, that is what’s going on, I guess this is a thing after all. But I wouldn’t have known before.

To me, it was like a piece of fictional body language we’ve all implicitly agreed represents “the thing your body does when you’re embarrassed or flattered or lovestruck.” I know there’s a particular feeling there, which I could attach to the foreign physical motion, and let the blushing description conjure it up. It didn’t seem any weirder than a book having elves.

(Brienne has written about how writing fiction, and reading about writing fiction, has helped her get better at interpreting emotions from physical cues. They certainly are often real physical cues – I just think the points where this breaks down are interesting.)

Online

There’s another case where humans are innovatively trying to solve the problem of representing feelings in a written medium, which is casual messaging. It’s a constantly evolving blend of your best descriptive words, verbs, emoticons, emojis, and now stickers and gifs and whatever else your platform supports. Let’s draw your attention to the humble emoticon, a marvel of written language. A handful of typographic characters represent a human face – something millions of years of evolution have fine-tuned our brains to interpret precisely.

(In some cases, these are pretty accurate: :) and ^_^ represent more similar things than :) and ;) do, even though ^_^ doesn’t even have the classic turned-up mouth of representational smiles. Body language: it works!)

:)

:|

:<

Now let’s consider this familiar face:

:P

And think of the context in which it’s normally found. If someone was talking to you in person and told a joke, or made a sarcastic comment, and then stuck their tongue out, you’d be puzzled! Especially if they kept doing it! Despite being a clear representation of a human face, that expression only makes sense in a written medium.

I understand why something like :P needs to exist: If someone makes a joke at you in meatspace, how do you tell it’s a joke? Tone of voice, small facial expressions, the way they look at you, perhaps? All of those things are hard to convey in character form. A stuck-out tongue isn’t, and we know what it means.

The ;) and :D emojis translate to meatspace a little better, maybe. Still, when’s the last time someone winked slyly at you in person?

You certainly can communicate complex things by using your words [CITATION NEEDED], but especially in casual conversations, it’s nice to have expressive shortcuts. I wrote a bit ago:

Facebook Messenger’s addition of choosing chat colors and customizing the default emoji has, to me, made a weirdly big difference to what it feels like to use them. I think (at least with online messaging platforms I’ve tried before) it’s unique in letting you customize the environment you interact with another person (or a group of people) in.

In meatspace, you might often talk with someone in the same place – a bedroom, a college dining hall – and that interaction takes on the flavor of that place.

Even if not, in meatspace, you have an experience in common, which is the surrounding environment. It sets that interaction apart from all of the other ones. Taking a walk or going to a coffee shop to talk to someone feels different from sitting down in your shared living room, or from meeting them at your office.

You also have a lot of specific qualia of interacting with a person – a deep comfort, a slight tension, the exact sense of how they respond to eye contact or listen to you – all of which are either lost or replaced with cruder variations in the low-bandwidth context of text channels.

And Messenger doesn’t do much, but it adds a little bit of flavor to your interaction with someone besides the literal string of unicode characters they send you. Like, we’re miles apart and I may not currently be able to hear your voice or appreciate you in person, but instead, we can share the color red and send each other a picture of a camel in three different sizes, which is a step in that direction.

(Other emoticons sometimes take on their own valences: The game master in an online RPG I played in had a habit of typing only " : ) " in response when you asked him a juicy question, which quickly filled players with a sense of excitement and foreboding. I've since tried using it on other platforms, before realizing that it doesn't actually convey any of that to literally anyone else. Similarly, users of certain websites may have a strong reaction to the typographic smiley "uwu".)

Reasoning from fictional examples

In something that could arguably be called a study, I grabbed three books and chose some arbitrary pages in them to look at how characters' emotions are represented, particularly around dialogue.

Lirael by Garth Nix:

133: Lirael "shivers" as she reads a book about a monster. She "stops reading, nervously swallows, and reads the last line again", and "breathes a long sigh of relief."

428: She “nods dumbly” in response to another character, and stares at an unfamiliar figure.

259: A character smiles when reading a letter from a friend.

624: Two characters “exchange glances of concern”, one “speaks quickly”.

Most of these are pretty reasonable. The first one feels a little overdone to me, but then again, she's really agitated while reading the book, so maybe it's warranted? Nonetheless, flipping through, I think this is Garth Nix's main strategy. The characters might speak "honestly" or "nervously" or "with deliberation" as well, but when Nix really wants you to know how someone's feeling, he'll show you how they act.

American Gods by Neil Gaiman:

First page I flipped to didn’t have any.

364: A character “smiles”, “makes a moue”, “smiles again”, “tips her head to one side”. Shadow (the main character) “feels himself beginning to blush.”

175: A character “scowls fleetingly.” A different character “sighs” and his tone changes.

The last page also didn’t have any.

Gaiman spends more time laying out a character's thoughts: Shadow imagines how a moment came to happen, or it's his interpretation that gives flavor – "[Another character] looked very old as he said this, and fragile."

Earth by David Brin:

First two pages I flipped to didn’t have dialogue.

428: Characters “wave nonchalantly”, “pause”, “shrug”, “shrug” again, “fold his arms, looking quite relaxed”, speak with “an ingratiating smile”, and “continue with a smile”.

207: Characters "nod" and one "plants a hand on another's shoulder".

168: “Shivers coursed his back. Logan wondered if a microbe might feel this way, looking with sudden awe into a truly giant soul.” One’s “face grows ashen”, another “blinks.” Amusingly, “the engineer shrugged, an expressive gesture.” Expressive of what?

Brin spends a lot of time living in characters’ heads, describing their thoughts. This gives him time to build his detailed sci-fi world, and also gives you enough of a picture of characters that it’s easy to imagine their reactions later on.

How to use this

I don't think this is necessarily a problem in need of a solution, but fiction is trying to represent the way real people might act. Even if the premise of your novel starts with "there's magic", it probably doesn't segue into "there's magic and also humans are 50% more physically expressive, and they are always blushing." (…Maybe the blushing thing is just me.) There's something appealing about being able to represent body language accurately.

The quick analysis in the section above suggests at least three ways writers express how a fictional character is feeling to a reader. I don't mean to imply that any one is objectively better than the others, although the third is my favorite.

1) Just describe how they feel. “Alice was nervous”, “Bob said happily.”

This gives the reader information. How was Alice feeling? Clearly, Alice was nervous. It doesn’t convey nervousness, though. Saying the word “nervous” does not generally make someone nervous – it takes some mental effort to translate that into nervous actions or thoughts.

2) Describe their action. A character’s sighing, their chin stuck out, their unblinking eye contact, their gulping. Sheets like these exist to help.

I suspect these work in two ways:

  1. You can imagine yourself doing the action, and then what mental state might have caused it. Especially if it’s the main character, and you’re spending time in their head anyway. It might also be “Wow, Lirael is shivering in fear, and I have to be really scared before I shiver, so she must be very frightened,” though I imagine that making this inference is asking a lot of a reader.
  2. You can visualize a character doing it, in your mental map of the scene, and imagine what you’d think if you saw someone doing it.

Either way, the author is using visualization to get you to recreate being there yourself. This is where I’m claiming some weird things like fictional body language develop.

3) Use metaphor, or describe a character’s thoughts, in such a way that the reader generates the feeling in their own head.

Gaiman in particular does this quite skillfully in American Gods.

[Listening to another character talk on and on, and then pause:] Shadow hadn’t said anything, and hadn’t planned to say anything, but he felt it was required of him, so said, “Well, weren’t they?”

[While in various degrees of psychological turmoil:] He did not trust his voice not to betray him, so he simply shook his head.

[And:] He wished he could come back with something smart and sharp, but Town was already back at the Humvee, and climbing up into the car; and Shadow still couldn't think of anything clever to say.

Also metaphors, or images:

Chicago happened slowly, like a migraine.

There must have been thirty, maybe even forty people in that hall, and now they were every one of them looking intently at their playing cards, or their feet, or their fingernails, and pretending as hard as they could not to be listening.

By doing the mental exercises written out in the text – letting your mind run over them and provoke some images – the author can get your brain to conjure the feeling from some unrelated description. How cool is that! It doesn't actually matter whether, in the narrative, it's occurred to Shadow that Chicago is happening like a migraine. Your brain is doing the important thing on its own.


(Possible Facebook messenger equivalents: 1) “I’m sad” or “That’s funny!” 2) Emoticons / emotive stickers, *hug* or other actions 3) Gifs, more abstract stickers.)


You might be able to use this to derive some wisdom for writing fiction. I like metaphors, for one.

If you want to do body language more accurately, you can also pay attention to exactly how an emotion feels to you, where it sits in your body or your mind – meditation might be helpful – and try and describe that.

Either might be problematic because people experience emotions differently – the exact way you feel an emotion might be completely inscrutable to someone else. Maybe you don’t usually feel emotions in your body, or you don’t easily name them in your head. Maybe your body language isn’t standard. Emotions tend to derive from similar parts of the nervous system, though, so you probably won’t be totally off.

(It'd also be cool if the reader then learned about a new way to feel emotions from your fiction, but the failure mode I'm thinking of is 'reader has no idea what you were trying to convey.')

You could also try people-watching (or watching TV or a movie), and examining how you know someone is feeling a certain way. I bet some of these are subtle – slight shifts in posture and expression – but you might get some inspiration. (Unless you had to learn this by memorizing cues from fiction, in which case this exercise is less likely to be useful.)


Overall, given all the shades of nuance that go into emotional valence, and the different ways people feel or demonstrate emotions, I think it’s hardly surprising that we’ve come up with linguistic shorthands, even in places that are trying to be representational.


[Header image: images from the EmojiOne 5.0 update, assembled by the honestly fantastic Emojipedia Blog.]

Book review: Barriers to Bioweapons

I spent a memorable college summer – and much of the next quarter – trying to run a particular experiment involving infecting cultured tissue cells with bacteria and bacteriophage. The experiment itself was pretty interesting, and I thought the underpinnings were both useful and exciting. To prepare, all I had to do was manage to get some tissue culture up and running. Nobody else at the college was doing tissue culture, and the only lab technician who had experience with it was out that summer.

No matter, right? We had equipment, and a little money for supplies, and some frozen cell lines to thaw. Even though neither I, nor the student helping me, nor my professor, had done tissue culture before, we had the internet, and even some additional help once a week from a student who did tissue culture professionally. Labs all around the world do tissue culture every day, and have for decades. Cakewalk.

Five months later, the entire project had basically stalled. The tissue cells were growing slower and slower, we hadn’t been able to successfully use them for experiments, our frozen backup stocks were rapidly dwindling and of questionable quality, and I was out of ideas on how to troubleshoot any of the myriad things that could have been going wrong. Was it the media? The cells? The environment? Was something contaminated? If so, what? Was the temperature wrong? The timing? I threw up my hands and went back to the phage lab downstairs, mentally retiring to a life of growing E. coli at slightly above room temperature.

It was especially frustrating, because this was just tissue culture. It's a fundamental technique of modern biology. It's not an unsolved problem. It was just benchwork being hard to figure out without hands-on expertise. All I can say is that if any disgruntled lone wolves trying to start bioterrorism programs in their basements were also stuck between the third PDF from 1970 about freezing cells with a minimal setup and losing their fourth batch of cells because they gently tapped the container until it was cloudy but not cloudy enough, it'd be completely predictable if they gave up their evil plans right there and started volunteering in soup kitchens instead.

This is the memory I kept coming back to when reading Barriers to Bioweapons: The Challenges of Expertise and Organization for Weapons Development, by Sonia Ben Ouagrham-Gormley. I originally found her work on the Bulletin of Atomic Scientists’ website, which was a compelling selling point even before I read anything. She had written a book that contradicted one of my long-held impressions about bioweapons – that they’re comparatively cheap and easy to develop.

It was obscure enough that it wasn't at the library, but at the low cost of ending up on every watchlist ever, I got it from Amazon and can ultimately recommend it. I think it's a well-researched and interesting counterpoint to common intuitions about biological weapons, and it changed my mind about some of them.

I’ve written before:

For all the attention drawn by biological weapons, they are, for now, rare. […] This should paint the picture of an uneasy world. It certainly does to me. If you buy arguments about why risk from bioweapons is important to consider, given that they kill far fewer people than many other threats, then this also suggests that we’re in an unusually fortunate place right now – one where the threat is deep and getting deeper, but nobody is actively under attack.

Barriers to Bioweapons argues that actually, we're not all living on borrowed time – that there are real organizational and expertise challenges to successfully creating bioweapons. Ouagrham-Gormley then discusses specific historical programs, and their implications for biosecurity in the future.

The importance of knowledge transfer

The first part of the book discusses in detail how tacit knowledge spreads, and how scientific progress is actually accomplished in an organization. I was fascinated by how much research exists here, for science especially – I could imagine finding some of this content in a very evidence-driven book on managing businesses, but I wouldn’t have thought I could find the same for, e.g., how switching locations tends to make research much harder to replicate because available equipment and supplies have changed just slightly, or that researchers at Harvard Medical School publish better, more-frequently-cited articles when they and their co-authors work in the same building.

Basically, this book claims – and I'm inclined to agree – that spreading knowledge about specific techniques is really, really hard. What makes a particular thing work is often a series of unusual tricks, the result of trial and error, that never makes it into the 'methods' of a journal. (The hashtag #OverlyHonestMethods describes this better than I could.)

All of that tacit knowledge is promoted by organizational structures and stored in people, so the movement and interaction of people is crucial in sharing knowledge. Huge problems arise when that knowledge is lost. The book describes the Department of Energy replacing nuclear weapons parts in the late 1990s, and realizing that they no longer knew how to make a particular foam crucial to thermonuclear warheads, that their documentation for the foam’s production was insufficient, and that anyone who had done it before was long retired. They had to spend nine years and 70 million dollars inventing a substitute for a single component.

Every now and then when reading this, I was tempted to think “Oh come on, it can’t be that hard.” And then I remembered tissue culture.

The thing that went wrong that summer was a lack of tacit knowledge. Tacit knowledge is very, very slow to build, and you can either do it by laboriously building that knowledge from scratch, or by learning from someone else who does. Bioweapon programs tend to fail because their organizations neither retain nor effectively share tacit knowledge, and so their hopeful scientific innovations take extremely long and often never materialize. If you can’t solve the problems that your field has already solved, you’re never going to be able to solve new ones.

For a book on why bioweapons programs have historically failed, this section seems like it would be awkwardly useful reading for scientists, or indeed anyone else trying to build communities that can effectively research and solve problems together: incentives and cross-pollination are crucial; projects with multiple phases should have those phases integrated vertically; tacit knowledge stored in brains is important.

Specific programs

In the second part of the book, Ouagrham-Gormley discusses specific bioweapons programs – American, Soviet, Iraqi, South African, and that of the Aum Shinrikyo cult – why they failed at one or more of these levels, and why we might expect future programs to go the same way. It's true that all of these programs failed to yield much in the way of military results, despite enormous expenditures of resources and personnel, and while I haven't fact-checked this section, I'm tempted to buy her conclusions.

Secrecy can be lethal to complicated programs. Because of secrecy constraints:

  • Higher-level managers or governments have to put more faith in lower-level managers and their results, letting them steal or redirect resources
  • Sites are small and geographically isolated from each other
  • Scientists can’t talk about their work with colleagues in other divisions
  • Collaboration is limited, especially internationally
  • Facilities are more inclined to try to be self-sufficient, leading to extra delays
  • Maintaining secrecy is costly
  • Destroying research or moving to avoid raids or inspections sets back progress

Authoritarian leadership structures go hand in hand with secrecy, and have similarly dire ramifications:

  • Directives aren’t based in scientific plausibility
  • An exclusive focus on results means that researchers are incentivized to make up results to avoid harsh punishments
  • Supervisors are also incentivized to make up results, which works, because their supervisors don’t understand what they’re doing
  • Feedback only goes down the hierarchy, suggestions from staff aren’t passed up
  • Working in strict settings is unrewarding and demoralizes staff
  • Promotion is based on political favor, not expertise, and reduces quality of research
  • Power struggles between staff reduce ability to cooperate

Sometimes cases are more subtle. The US bioweapons program ran from roughly 1943 to 1969, and didn't totally fall prey to either of these – researchers and staff at Fort Detrick met across different levels and cross-pollinated knowledge with relative freedom. Crucially, it was "secret but legal": it operated before the signature of the Biological Weapons Convention (BWC), and therefore "it could afford to maintain a certain degree of openness in its dealings with the outside world."

Its relatively open status was highly unusual. Nonetheless, while it achieved a surprising amount, the US program still failed to produce a working weapon after 27 years. It was closed in 1969, a few years before the US went on to sign and ratify the BWC itself.

Ouagrham-Gormley says this failure was mostly due to a lack of collaboration between scientists and the military, shifting infrastructure early on, and diffuse organization. The scientists at Fort Detrick made impressive research progress, including dozens of vaccines, as well as research tools like formaldehyde decontamination, negative air pressure in pathogen labs, and the laminar flow hood now used ubiquitously for biological work in labs across the world.

[Image: a laminar flow hood. Used for, among other things, tissue culture. || Public domain, by TimVickers.]

But research and weaponization are two different things, and military and scientific applications rarely met. The program was never considered a priority by the military. In fact, its leadership in the government – responsibilities and funding decisions – was ambiguously split among about a dozen agencies, and it was reorganized and re-funded sporadically depending on what wars were going on at the time. Uncertainty and a lack of coordination ultimately led the program nowhere. It was amusing to learn that the same issue plaguing biodefense in the US today was also responsible for sinking bioweapons research decades ago.

Ouagrham-Gormley discusses the Japanese Aum Shinrikyo cult's large bioweapons efforts, but doesn't discuss Japan's military bioweapon program, Unit 731, which ran from 1932 to 1945 and included testing numerous agents on Chinese civilians, as well as a variety of attacks on Chinese cities. While the experiments conducted are among the most horrific war crimes known, its wartime use was varied – releasing bombs containing bubonic-plague-infected fleas, as well as other human, livestock, and crop diseases – and killed between 200,000 and 600,000 people. Unless I'm very wrong, this makes it the largest modern bioweapon attack. Further attacks were planned, including on the US, but the program was ended and its evidence destroyed when Japan surrendered in World War II.

I haven't looked into the case too much, but it's interesting because that program appears to have had an unusually high death toll (for a bioweapon program). As far as I can tell, some factors were: general government approval and plenty of resources, stable leadership, a single main location, and constant testing of weapons on enemy civilians, which itself added to the death toll – they didn't wait to develop perfect weapons, and gathered data from early tests, without much concern for secrecy. The program also predated the others, which might have been a factor in its ability to test weapons on civilian populations (even though such use was technically forbidden by the 1925 Geneva Protocol's ban on bacteriological warfare).

Ramifications for the future

One interesting takeaway is that covertness has a substantial cost – forcing a program to "go underground" is a huge impediment to progress. This suggests that the Biological Weapons Convention, which has been criticized for being toothless and lacking provisions for enforcement, is actually already doing very useful work – simply by forcing programs to be covert at all. Of course, Ouagrham-Gormley recommends adding those provisions anyway, as well as checks on signatory nations – like random inspections – that more effectively add to the cost of maintaining secrecy for any potential efforts. I agree.

In fact, it’s working already. Consider:

  • In weapons programs, expertise is crucial – in manufacturing, in the relevant organisms, and in bioweapons themselves.
  • The Biological Weapons Convention has been in force since 1975. The huge Soviet bioweapon program continued secretly, but was shrinking in the late 1980s, and was officially acknowledged and ended in 1992.
  • While the problem hasn’t disappeared since then, new experts in bioweapon creation are very rare.
  • People working on bioweapons before 1975 are mostly already retired.

As a result, that tacit knowledge transfer is being cut off. A new state that wanted to pick up bioweapons would have to start from scratch. The entire field has been set back by decades, and for once, that statement is a triumph.

Another takeaway is that the dominant message, from the government and elsewhere, about the perils of bioweapons needs to change. Groups from Japan's Unit 731 to al-Qaeda have started bioweapon programs because they learned that the enemy was scared that they would. This suggests that the meme "bioweapons are cheap, easy, and dangerous" is actively dangerous for biodefense. Aside from the fact that, as the rest of the book demonstrates, it isn't true, it encourages groups to make bioweapons – so we should perhaps stop spreading it.

(Granted, the book also relays an anecdote from Shoko Asahara, the head of the Aum Shinrikyo cult, who, after the cult's bioterrorism project failed, "speculat[ed] that U.S. assessments of the risk of biological terrorism were designed to mislead terrorist groups into pursuing such weapons." So maybe there's something there, but I strongly suspect that such a design was inadvertent and not worth relying on.)

I’m overall fairly convinced by the message of the book, that bioweapons programs are complicated and difficult, that merely getting a hold of a dangerous agent is the least of the problems of a theoretical bioweapons program, and that small actors are unlikely to be able to effectively pull this off now.

I think Ouagrham-Gormley and I disagree most on the dangers of biotechnology. This isn’t discussed much in the book, but when she references it towards the end, she calls it “the so-called biotechnology revolution” and describes the difficulty and hidden years of work that have gone into feats of synthetic biology, like synthesizing poliovirus in 2002.

It makes sense that the early syntheses of viruses, or other microbiological works of magic, would be incredibly difficult and take years of expertise. This was also true for, say, early genome sequencing, which took thousands of hours of hand-aligning individual base pairs. But it turns out being able to sequence genomes is kind of useful, and now…

[Figure: cost per genome over time, through 2015.]

That biotechnology is becoming more accessible seems true, and the book, for me, throws into a critical light the importance of being able to somehow keep track of how accessible it is. Using DIYbio hobbyists as a case study might be valuable, or looking at machines like this "digital-to-biological converter for on-demand production of biologics".

How low are those tacit knowledge barriers? How low will they be? There are obvious reasons to not necessarily publish all of these results, but somebody ought to keep track.

Ouagrham-Gormley does stress, I think accurately, that getting a hold of a pathogen is a small part of the problem. In the past, I’ve made the argument that biodefense is critical because “the smallpox genome is online and you can just download it” – which, don’t get me wrong, still isn’t reassuring – but that particular example isn’t immediately a global catastrophe. The US and Soviet Russia tried weaponizing smallpox, and it’s not terribly easy. (Imagine that you, you in particular, are evil, and have just been handed a sample of smallpox. What are you going to do with it? …Start some tissue culture?)

(Semi-relatedly, did you know that the US government has enough smallpox vaccine stockpiled for everyone in the country? I didn’t.)

…But maybe this will become less of a barrier in the future, too. Genetic engineering might create pathogens more suited for bioweapons than extant diseases. They might be well-tailored enough not to require dispersal via the clunky, harsh munitions that have stymied past efforts to turn delicate microbes into weapons. Obviously, natural pandemics can happen without those – could human alteration give a pathogen that much advantage over the countless numbers of pathogens randomly churned out of humans and animals daily? We don’t know.

The book states: “In the bioweapons field, unless future technologies can render biomaterials behavior predictable and controllable… the role of expertise and its socio-organizational context will remain critically important barriers to bioweapons development.”

Which seems like the crux – I agree with that statement, but making biomaterials predictable and controllable is exactly what synthetic biology is trying to achieve, and we need to pay a lot of attention to how these factors will change in the future. Biosafety needs to be adaptable.

[Meme image]

At least, biodefense in the future of cheap DNA synthesis will probably still have a little more going for it than ad campaigns like this.

[Cross-posted to the Global Risk Research Network.]

Metablogging

Some housekeeping notes (not your monthly blog post):

I changed the format because I didn’t like the text settings on the old one. Let me know if anything looks broken. (In particular, the main type looks weird to me, but it’s ostensibly the same font and size, so I’m not sure why.)

I added a blogroll to this blog. A short version appears in the sidebar, a longer version appears on its own page.

The official tumblr of this blog is eukaryotetumbles.tumblr.com.

A couple pages on this blog you may not have been aware existed: The commissions page, the “List of online literature I like” page.

This is a really cool video of a jellyfish I thought you might like. (I didn’t take it.)

Suggestions for new posts, feedback, fact-checking, spambots, etc., are always welcome at eukaryotewritesblog (at) gmail.com.

As a human with bills to pay, I'm vaguely considering ways of monetizing my writing here. I know that Patreon is a thing people sometimes use successfully. I think another interesting approach would be one where I provide a list of post topics that are in my to-write queue, people commit some money towards whichever ones they want to read, and I get the money once the post is published – but I don't think a mechanism for that already exists, and it sounds like a pain to set up. If you have any thoughts or ideas in this area, I'd be curious to hear them.

Finally, I’m still looking for ways to make a nice-looking online dichotomous key. Let me know if you have ideas!