Author Archives: Georgia

About Georgia

Research and writing on effective altruism, risk, humans, animals, and microbes. Blog @ eukaryotewritesblog.com. I also write at globalriskresearch.org.

Why was smallpox so deadly in the Americas?

In Eurasia, smallpox was undoubtedly a killer. It came and went in waves for ages, changing the course of empires and countries. 30% of those infected with the disease died from it. This is astonishingly high mortality for a disease – worse than botulism, Lassa fever, tularemia, the Spanish flu, Legionnaires' disease, and SARS.

In the Americas, smallpox was a rampaging monster.

When it first appeared on Hispaniola in 1518, it spread 150 miles in four months and killed 30-50% of people – not just of those infected, but of the entire population1. It’s said to have infected a quarter of the population of the Aztec Empire within two weeks, killing half of those2, and setting the stage for another disease to kill many more3.

Then, alongside other diseases and warfare, it contributed to 84% of the Incan Empire dying4.

Among the people who sometimes traded at the Hudson’s Bay Company’s Cumberland House on the Saskatchewan River in 1781 and 1782, 95% seem to have died. Of them, the U’Basquiau (also called, I believe, the Basquia Cree people) were entirely killed5.

Over time, smallpox killed 90% of the Mandan tribe, along with 80% of people in the Columbia River region, 67% of the Omahas, and half of the Piegan tribe and of the Huron and Iroquois Confederations6.

Here are some estimates of the death rates between ~1605 and 1650 in various Northeastern American groups, during a period of severe smallpox epidemics. Particularly astonishing figures are highlighted (highlighting mine).

[Table: estimated death rates in various Northeastern American groups, ~1605–1650]

Figure adapted from European contact and Indian depopulation in the Northeast: The timing of the first epidemics9

Most of our truly deadly diseases don’t move quickly or aren’t contagious. Rabies, prion diseases, and primary amoebic meningoencephalitis have more or less 100% fatality rates. So do trypanosomiasis (African sleeping sickness) and HIV, when untreated.

When we look at the impact of smallpox in the Americas, we see death rates worse than the worst forms of Ebola, arriving extremely fast.

What happened?

In short, probably a total lack of previous exposure to smallpox and the other pathogenic European diseases, combined with cultural responses that helped the pathogen spread. The fact that smallpox was intentionally spread by Europeans in some cases probably contributed, but I’m not sure how much.

Virgin soil

Smallpox and its relatives in the orthopoxvirus genus – monkeypox, cowpox, horsepox, and alastrim (smallpox’s milder variant) – had been established in Eurasia and Africa for centuries. Exposure to one would give some immune protection against the others. Variolation, a cruder forerunner of vaccination, was also sometimes practiced.

Between these, and the frequent waves of outbreaks, a European adult would typically have survived some kind of direct exposure to smallpox-like antigens in the past, and would have the protection of antibodies, preventing future sickness. As children, they would also have had the indirect protection of maternal antibodies1.

In the Americas, everyone was exposed to the most virulent form of the disease with no defenses. This is called a “virgin soil epidemic”.

In this case, epidemics would stampede through occasionally – ferociously, but infrequently enough that any given tribe wouldn’t build up antibodies, and maternal protection never developed. Many groups were devastated repeatedly over decades by smallpox outbreaks, as well as by other European diseases: the Cocoliztli epidemics3, measles, influenza, typhoid fever, and others7.

In virgin soil epidemics, including these ones, disease strikes all ages: children and babies, the elderly, and strong young adults6. This sort of indiscriminate attack on all age groups is a known sign in animal populations that a disease is extremely lethal8. In humans, it also grinds the gears of society to a halt.

When so much of the population of a village was too sick to move, not only was there nobody to tend crops or hunt – setting the stage for scarcity and starvation – but there was nobody to fetch water. Dehydration is suspected as a major cause of death, especially in children1,6. Very sick mothers would also be unable to nurse infants6.

Other factors that probably contributed:

Cultural factors

Native Americans had some concept of disease transmission – some people would run away when smallpox arrived in their village, possibly carrying and spreading the germ7. They also would steer clear of other tribes that had it. That said, many people lived in communal or large family dwellings, and didn’t quarantine the sick to private areas. They continued to sleep alongside and spend time with contagious people6.

In addition, pre-colonization Native American measures against disease were probably somewhat effective against pre-colonization diseases, but tended to be ineffective or harmful for European ones. Sweat baths, for instance, could have spread the disease and wouldn’t have helped9. Transmission could also have occurred during funerals10.

Looking at combinations of the above factors, death rates of 70% and up are not entirely surprising.
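To see how these factors compound, here’s a toy calculation – every number in it is made up for illustration, not taken from any of the sources above:

```python
# Toy model of virgin-soil mortality: direct smallpox deaths plus secondary
# deaths from societal breakdown. All parameters are illustrative guesses.

attack_rate = 0.9          # virgin soil: nearly everyone gets infected
case_fatality = 0.4        # direct deaths among the infected
secondary_mortality = 0.5  # deaths among everyone left when nobody can
                           # fetch water, tend crops, or nurse infants

direct_deaths = attack_rate * case_fatality               # 0.36
total = direct_deaths + (1 - direct_deaths) * secondary_mortality
print(f"combined mortality: {total:.0%}")                 # 68%
```

With plausible-looking inputs, mortality around 70% falls out of ordinary multiplication – no extra virulence beyond the Eurasian strain is needed.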

Use as a bioweapon

Colonizers repeatedly used smallpox as an early form of biowarfare against Native Americans, knowing that they were more susceptible. This included, at times, intentionally withholding vaccines from them. Smallpox also spreads rapidly on its own, so I’m not sure how much intentional spread contributed to the overall extreme death toll, although it certainly resulted in tremendous loss of life.

Probably not responsible:

Genetics. A lack of immunological diversity, or some other genetic susceptibility, has been cited as a possible reason for the extreme mortality rate. This might be particularly expected in South America, because of the serial founder effect – in which a small number of people move away from their home community and start their own, repeated over and over again, all the way across Beringia and down North America, into South America9.

That said, this theory is considered unlikely today1. For one, the immune systems of native peoples of the Americas respond to vaccines much as European immune systems do10. For another, groups in the Americas also had unusually high mortality from other European diseases (influenza, measles, etc.), but this mortality decreased relatively quickly after first exposure – too quickly for genetic change to explain the improvement10.

Some have also proposed general malnutrition, which would weaken the immune system and make it harder to fight off smallpox. This doesn’t seem to have been a factor1. Scarce food was a fact of life in many Native American groups, but then again, the same was true for European peasants, who still didn’t suffer as much from smallpox.

Africa

Smallpox has had a long history in parts of Africa – the earliest known instance of smallpox infection comes from Egyptian mummies2, and frequent European contact throughout the centuries spread the disease to the parts they interacted with. Various groups in North, East, and West Africa developed their own variolation techniques11.

However, when the disease was introduced to areas where it hadn’t existed before, death rates were as astounding as those in the Americas: one source describes mortality rates of 80% among the Griqua people of South Africa. Less quantitatively, it describes how several Hottentot tribes were “wiped out” by the disease, how some tribes in northern Kenya were “almost exterminated”, and how parts of the eastern Congo River basin became “completely depopulated”2.

This makes it sound like smallpox acted similarly in unexposed people in Africa. It also lends another piece of evidence against the genetic-predisposition hypothesis – the disease acting the same way on groups so geographically removed.

Wikipedia also tells me that smallpox was comparably deadly when it was first introduced to various Australasian islands, but I haven’t looked into this further.

Extra

Required reading on humanism, smallpox, and smallpox eradication.


When smallpox arrived in India around 400 AD, it spurred the creation of Shitala, the Hindu goddess of (both causing and curing) smallpox. She is normally depicted on a donkey, carrying a broom for either spreading germs or sweeping out a house, and a bowl of either smallpox germs or of cool water.

The last set of images on this page also seems to be a depiction of the goddess, and captures something altogether different, something more dark and visceral.


Finally, this blog has a Patreon. If you like what you’ve read, consider giving it your support so I can make more of it.

References


  1. Riley, J. C. (2010). Smallpox and American Indians revisited. Journal of the History of Medicine and Allied Sciences, 65(4), 445-477.
  2. Fenner, F., Henderson, D. A., Arita, I., Jezek, Z., Ladnyi, I. D., & World Health Organization. (1988). Smallpox and its eradication.
  3. Acuna-Soto, R., Stahle, D. W., Cleaveland, M. K., & Therrell, M. D. (2002). Megadrought and megadeath in 16th century Mexico. Revista Biomédica, 13, 289-292.
  4. Beer, M., & Eisenstat, R. A. (2000). The silent killers of strategy implementation and learning. Sloan Management Review, 41(4), 29.
  5. Houston, C. S., & Houston, S. (2000). The first smallpox epidemic on the Canadian Plains: in the fur-traders’ words. Canadian Journal of Infectious Diseases and Medical Microbiology, 11(2), 112-115.
  6. Crosby, A. W. (1976). Virgin soil epidemics as a factor in the aboriginal depopulation in America. The William and Mary Quarterly: A Magazine of Early American History, 289-299.
  7. Sundstrom, L. (1997). Smallpox Used Them Up: References to Epidemic Disease in Northern Plains Winter Counts, 1714-1920. Ethnohistory, 305-343.
  8. MacPhee, R. D., & Greenwood, A. D. (2013). Infectious disease, endangerment, and extinction. International Journal of Evolutionary Biology, 2013.
  9. Snow, D. R., & Lanphear, K. M. (1988). European contact and Indian depopulation in the Northeast: the timing of the first epidemics. Ethnohistory, 15-33.
  10. Walker, R. S., Sattenspiel, L., & Hill, K. R. (2015). Mortality from contact-related epidemics among indigenous populations in Greater Amazonia. Scientific Reports, 5, 14032.
  11. Herbert, E. W. (1975). Smallpox inoculation in Africa. The Journal of African History, 16(4), 539-559.

[OPEN QUESTION] Insect declines: Why aren’t we dead already?

One study on a German nature reserve found insect biomass (e.g., kilograms of insects you’d catch in a net) has declined 75% over the last 27 years. Here’s a good summary that answered some questions I had about the study itself.

Another review study found that, globally, invertebrate (mostly insect) abundance has declined 35% over the last 40 years.
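For scale, those two findings imply quite different steady annual decline rates (a back-of-envelope sketch; only the 75%-over-27-years and 35%-over-40-years figures come from the studies above):

```python
# Convert a total decline over a period into the implied constant annual rate.

def annual_decline(total_decline, years):
    """Constant yearly decline rate implied by a total fractional decline."""
    remaining = 1.0 - total_decline
    return 1.0 - remaining ** (1.0 / years)

# German nature reserve study: 75% biomass decline over 27 years
german = annual_decline(0.75, 27)        # ~5% per year, compounding

# Global invertebrate review: 35% abundance decline over 40 years
global_rate = annual_decline(0.35, 40)   # ~1% per year, compounding

print(f"German study: {german:.1%}/yr, global review: {global_rate:.1%}/yr")
```

If the German figure held globally and steadily, a ~5%-per-year compounding loss would be far more dramatic than the ~1% per year implied by the global review.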

Insects are important, as I’ve been told repeatedly (and written about myself). So this news raises a very important and urgent question:

Why aren’t we all dead yet?

This is an honest question, and I want an answer. (Readers will know I take catastrophic possibilities very seriously.) Insects are among the most numerous animals on earth and central to our ecosystems, food chains, etcetera. 35%+ lower populations are the kind of thing where, if you’d asked me to guess the result in advance, I would have expected marked effects on ecosystems. At 75% declines – if the German study reflects the rest of the world to any degree – I would have predicted literal global catastrophe.

Yet these declines have apparently been going on consistently for decades, and the biosphere, while not exactly doing great, hasn’t literally exploded.

So what’s the deal? Any ideas?

Speculation/answers welcome in the comments. Try to convey how confident you are and what your sources are, if you refer to any.

(If your answer is “the biosphere has exploded already”, can you explain how, and why that hasn’t changed trends in things like global crop production or human population growth? I believe, and think most other readers will agree, that various parts of ecosystems worldwide are obviously being degraded, but not to the degree that I would expect by drastic global declines in insect numbers (especially compared to other well-understood factors like carbon dioxide emissions or deforestation.) If you have reason to think otherwise, let me know.)


Sidenote: I was going to append this with a similar question about the decline in ocean phytoplankton levels I’d heard about – the news that populations of phytoplankton, the little guys that feed the ocean food chain and make most of the oxygen on earth, have decreased 40% since 1950.

But a better dataset, collected over 80 years with consistent methods, suggests that phytoplankton have actually increased over time. There’s speculation that the appearance of decrease in the other study may have been because they switched measurement methods partway through. An apocalypse for another day! Or hopefully, no other day, ever.


Also, this blog has a Patreon. If you like my work, consider incentivizing me to make more!

Caring less

Why don’t more attempts at persuasion take the form “care less about ABC”, rather than the popular “care more about XYZ”?

People, in general, can only do so much caring. We can only spend so many resources and so much effort and brainpower on the things we value.

For instance: Avery spends 40 hours a week working at a homeless shelter, and a substantial amount of their free time researching issues and lobbying for better policy for the homeless. Avery learns about existential risk and decides that it’s much more important than homelessness, say 100 times more, and is able to pivot their career into working on existential risk instead.

But nobody expects Avery to work 100 times harder on existential risk, or feel 100 times more strongly about it. That’s ridiculous. There literally isn’t enough time in the day, and thinking like that is a good way to burn out like a meteor in orbit.

Avery also doesn’t stop caring about homelessness – not at all. But as a result of caring so much more about existential risk, they do have to care less about homelessness (in any meaningful or practical sense).

And this is totally normal. It would be kind of nice if we could put a meaningful amount of energy in proportion to everything we care about, but we only have so much emotional and physical energy and time, and caring about different things over time is a natural part of learning and life.

When we talk about what we should care about, where we should focus more of our time and energy, we really only have one kludgey tool to do so: “care more”. Society, people, and companies are constantly telling you to “care more” about certain things. Your brain will take some of these, and through a complicated process, reallocate your priorities such that each gets an amount of attention that fits into your actual stores of time and emotional and physical energy.

But since what we value and how much is often considered, literally, the most important thing on this dismal earth, I want more nuance and more accuracy in this process. Introducing “consider caring less” into the conversation does this. It describes an important mental action and lets you describe what you want more accurately. Caring less already happens in people’s beliefs, it affects the world, so let’s talk about it.

On top of that, the constant chorus of “care more” is also exhausting. It creates a societal backdrop of guilt and anxiety. And some of this is good – the world is filled with problems and it’s important to care about fixing them. But you can’t actually do everything, and establishing the mental affordance to care less about something without disregarding it entirely or feeling like an awful human is better for the ability to prioritize things in accordance with your values.

I’ve been talking loosely about cause areas, but this applies everywhere. A friend describes how in work meetings, the only conversational attitude ever used is this is so important, we need to work hard on that, this part is crucial, let’s put more effort here. Are these employees going to work three times harder because you gave them more things to focus on, and didn’t tell them to focus on anything else less? No.

I suspect that more “care less” messaging would do wonders on creating a life or a society with more yin, more slack, and a more relaxed and sensible attitude towards priorities and values.

It also implies a style of thinking we’re less used to than “finding reasons people should care”, but it’s one that can be done and it reflects actual mental processes that already exist.


Why don’t we see this more?

(Or “why couldn’t we care less”?)

Some suggestions:

  • It’s more incongruous with brains

Brains can create connections easily, but unlike computers, can’t erase them. You can learn a fact by practicing it on notecards or by phone reminders, but can’t un-learn a fact except by disuse. “Care less” is requesting an action from you that’s harder to implement than “care more”.

  • It’s not obvious how to care less about something

This might be a cultural thing, though. Ways to care less about something include: mindfulness, devoting fewer resources towards a thing, allowing yourself to put more time into your other interests, and reconsidering when you’re taking an action based on the thing and deciding if you want to do something else.

  • It sounds preachy

I suspect people feel that if you assert “care more about this”, you’re just sharing your point of view, and information that might be useful, and working in good faith. But if you say “care less about that”, it feels like you know their values and their point of view, and you’re declaring that you understand their priorities better than them and that their priorities are wrong.

Actually, I think either “care more” or “care less” can have both of those nuances. At its best, “maybe care less” is a helpful and friendly suggestion made in your best interests. There are plenty of times I could use advice along the lines of “care less”.

At its worst, “care more” means “I know your values better than you, I know you’re not taking them seriously, and I’m so sure I’m right that I feel entitled to take up your valuable time explaining why.”

  • It invokes defensiveness

If you treat the things you care about as cherished parts of your identity, you may react badly to people telling you to care less about them. If so, “care less about something you already care about” has a negative emotional effect compared to “care more about something you don’t already care about”.

(On the other hand, being told you don’t have to worry about something can be a relief. It might depend on if you see the thought in question as a treasured gift or as a burden. I’m not sure.)

  • It’s less memetically fit

“Care more about X” sounds more exciting and engaging than “care less about Y”, so people are more likely to remember and spread it.

  • It’s dangerous

Maybe? Maybe by telling people to “care less” you’ll remove their motivations and drive them into an unrelenting apathy. But if you stop caring about something major, you can care more about other things.

Also, if this happens and harms people, it already happens when you tell people to “care more” and thus radically change their feelings and values. Unfortunately, a process exists by which other people can insert potentially-hostile memes into your brain without permission, and it’s called communication. “Care less” doesn’t seem obviously more risky than the reverse.

  • We already do (sometimes)

Buddhism has a lot to say on relinquishing attachment and desires.

Self-help-type things often say “don’t worry about what other people think of you” or “peer pressure isn’t worth your attention”, although they rarely come with strategies.

Criticism implicitly says “care less about X”, though this is rarely explicitly turned into suggestions for the reader.

Effective Altruism is an example of this when it criticizes ineffective cause areas or charities. This image implicitly says “…So maybe care more about animals on farms and less about pets,” which seems like a correct message for them to be sending.

Image from Animal Charity Evaluators.


Anyway, maybe “care less” messaging doesn’t work well for some reason, but existing messaging is homogeneous in this way and I’d love to see people at least try for some variation.


Photo taken at the 2016 Bay Area Secular Solstice. During an intermission, sticky notes and markers were passed around, and we were given the prompt: “If someone you knew and loved was suffering in a really bad situation, and was on the verge of giving up, what would you tell them?” Most of them were beautiful messages of encouragement and hope and support, but this was my favorite.


Crossposted on LessWrong.

This blog has a Patreon. If you like what you’ve read, consider giving it your support so I can make more of it.

2 extremely literal introspection techniques

Introspection literally means “to look inside”. Your eye is a camera made of meat – here are two ways to use your eyes to look at their own structure.

1. The Blue Field Entoptic Phenomenon

Stare up at a clear blue sky. (If no blue sky is available, for instance, if you’re in Seattle and it’s January, I was able to get a weaker version by putting my face close to this image instead. Your mileage may vary.)

BlueFieldGif

Animation of the phenomenon. Made by Wikimedia user Unmismoobjectivo, under a CC BY-SA 3.0 license.

Notice tiny white spots with dark tails darting around your field of vision? You’re looking at your own immune system – those are white blood cells moving in the capillaries in front of your retina. Red blood cells absorb blue light, but the larger white blood cells don’t, so each appears as a moving bright spot. The darker tails are build-ups of red blood cells in the narrow capillaries, all but blocked behind the large white blood cells.

This is clear enough that the speed at which the dots move can be used to measure blood flow in the retina. To do this, patients compare their blue field entoptic phenomenon to animated dots moving at various speeds. I wanted to find some calibrated gifs to try this at home, so if you see some, let me know.

On the other hand, if you see things that look like this all the time everywhere, it might be visual snow.

2. The Purkinje Tree

WARNING: A cell phone flashlight probably isn’t strong enough to damage your eyes, but especially if you try this with anything stronger than that, or if you have a condition that would make it very bad to accidentally shine a flashlight in your face, use your own judgement on proceeding.

Stand or lie down in a dark room.

Turn on your phone flashlight or a penlight, and hold it up against the side of your face.

Position yourself so that you’re looking into darkness, and the light beam passes just over the front of your eyes – you’re trying to get light to go across the surface of your pupil, but not directly into your eyes.

You might need to adjust the angle.

What you’re looking for is the Purkinje tree – shadows of the retinal blood vessels cast onto other parts of the retina. It was first seen by legendary Czech anatomist Jan Evangelista Purkyně, who also discovered Purkinje brain cells, sweat glands, and Purkinje fibers in the heart, and introduced the terms “blood plasma” and “protoplasm”.


The Purkinje Tree reminded me of aerial photos of branching riverbeds, as in this NASA photo of the Yarlung Tsangpo River in Tibet. So look for a structure like this.

Once you see it, the image will vanish quickly – your brain already gets an image of the blood vessels on the retina, so it’s used to removing it from your perception and will adapt. If you waggle the light source gently at about one hertz (once per second), the image stays visible.


Happy new year from Eukaryote Writes Blog!

This blog has a Patreon. If you like what you’ve read, consider giving it your support so I can make more of it.

The children of 3,500,000,000 years of evolution

[NASA image of the winter solstice from space. Found here.]

This is the speech I gave during the “Twilight” portion of Seattle’s 2017 Secular Solstice. See also the incomparable Jai’s speech. A retrospective on our solstice and how we did it coming soon.


Eons ago, perhaps in a volcanic vent in the deep sea, under crushing pressure, in total darkness, chemicals came together in a process that made copies of itself. We’re not exactly sure how this happened – perhaps a simple tangle of molecules grabbed other nearby molecules, and formed them into identical tangles.

You know the story – some of those chemical processes made mistakes along the way. A few of those copies were better at copying themselves, so there were more of them. But some of their copies were subtly different too. And so it goes. This seems straightforward, but this alone is the mechanic of evolution, the root of the tree of life. Everything else follows.

So these tangles of protein or DNA or whatever-it-was in the deep sea – they keep going. This chemical process grows a cell wall, DNA, a metabolism, starts banding together and eating sunlight.

By this point, the deep-sea vent itself had long since been swallowed up by tectonic plates, the rock recycled into magma beneath the ocean floor. But the process carried on.

Biologists even understand that if you let this process run for long enough, it starts going to war, and paying taxes, and curing diseases, and driving old beat-up cars, and lying awake at night wondering what it means to exist at all.

All of that? Evolution didn’t tell us to do that. Evolution is what gave you a fist-sized ball of neurons, and gave you the tools to reshape those neurons based on what you learned. And you did the rest.

Sure, evolution gave you some other things – hands for grabbing, a voice for communicating, a vague predilection for fat and sugar and other entities who are similar to you. But all of this is the output of a particular process – a long and unlikely chemical process for which you, the building blocks of your brain, your hands, your tastes, are a few of the results. None of this happened on purpose. In the eyes of the evolutionary tree of life, you can’t think about existing ‘for a greater reason’ beyond the result of this process. What would that mean? Does fusion ‘happen on purpose’? Does gravity work ‘for a greater reason’?

This might sound nihilistic. I think this has two lessons for us. First of all, when you and your friends are sitting in a diner eating milkshakes and french fries at 2 AM, as far as evolution gets any say in your life, you’re doing just fine.

But here’s the other thing – we’re a biological process. Apparently, we’re just what happens when you mix rocks and water together and then wait 3.5 billion years. Everything around us today, our lives, our struggles, nobody prepared us for this. It makes sense that there will be times when nothing makes sense. When your body or your brain don’t seem to be enough, well, we weren’t made for anything.

Nobody exists on purpose. There’s no promise that we’ll get to keep existing. There’s no assurance that we, as a species, will be able to solve our problems. Maybe one day we’ll run into something that’s just too big, and the tools evolution gave us won’t be enough. It hasn’t happened yet, but what do we know? As far as we’re aware, we’re the only processes in the whole wide night sky that have ever come this far at all. We don’t have the luxury of examples or mentors to look to.

All we have are these tools, this earth, this process, these hands, these minds, each other. Nothing less and nothing more.


This blog has a Patreon. If you like what you’ve read, consider giving it a look.

How many neurons are there?

Image from NOAA, in the public domain.

Last updated on March 16, 2018. I just finished a large project trying to estimate that. I’ve posted it on its own page here. Here’s the abstract:

We estimate that there are between 10^23 and 10^24 neurons on earth. Most of this is distributed roughly evenly among small land arthropods, fish, and nematodes, or possibly dominated by nematodes with the other two as significant contenders. For land arthropods, we multiplied the apparent number of land arthropods on earth by neuron counts for mostly springtail-sized animals, with some small percentage of larger insects modeled as fruit flies. For nematodes, we looked at studies that provide an average number of nematodes per square meter of soil or the ocean floor, and multiplied them by the number of neurons in Caenorhabditis elegans, an average-sized nematode. For fish, we used total estimates of ocean fish biomass, attributed some to species caught by humans, and used two different ways of allocating the remaining biomass. Most other classes of animal contribute 10^22 neurons at most, and so are unlikely to change the final analysis. We neglected a few categories that probably aren’t significant, but could conceivably push the estimate up.

Using a similar but less precise process based on evolutionary history and biomass over time, we also estimate that there have been between 10^32 and 10^33 neuron-years of work over the history of life, with around an order of magnitude of uncertainty.
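As one example of how these multiplications work, here’s a sketch of the nematode term alone. The neuron count (302, for the adult hermaphrodite C. elegans) is standard; the global nematode total is a stand-in figure of mine (roughly the ~4.4 × 10^20 soil nematodes reported in later global surveys), not a number from the original analysis:

```python
import math

# Back-of-envelope: neurons contributed by soil nematodes alone.
# Inputs are stand-ins, not the figures from the original analysis.
nematodes_on_earth = 4.4e20    # approximate global soil nematode count
neurons_per_nematode = 302     # adult hermaphrodite C. elegans

nematode_neurons = nematodes_on_earth * neurons_per_nematode
print(f"~10^{math.log10(nematode_neurons):.1f} neurons")  # ~10^23.1
```

A single term already lands in the 10^23–10^24 range quoted in the abstract, consistent with nematodes being a dominant contributor.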

Male dairy calves, male chicks, and relative suffering from animal foods

Or: Do “byproduct” animals of food animal production significantly affect estimates of comparative suffering caused by those foods? No.

[Image adapted from this image by Flickr user Sadie_Girl, under a CC BY-SA 2.0 license.]

See, relatedly: What happens to cows in the US?

Short version

There’s a shared belief in animal-welfare-oriented effective altruism that eggs and chicken meat cause a great deal more suffering than beef or dairy (1). You can make big strides towards reducing the amount of suffering caused in your diet by eating fewer eggs and chicken, even if you don’t go fully vegetarian or vegan.

Julia Galef, Brian Tomasik, and William MacAskill have made different versions of this calculation, with different metrics, and have come to the same conclusion. These three calculations include only the animal used directly for production. (Details about the calculations and my modifications are in the long version below.) But the production of several kinds of animal product requires bringing into existence animals that aren’t used for that product – like male calves born to lactating dairy cows, or male chicks born when producing egg-laying hens. I wondered if including these animals would significantly change the amount of suffering in various animal foods.

It turns out that even accounting for these other animals indirectly created during production, the amount of suffering relative to other animal foods doesn’t change very much. If you buy the premises of these quantitative ethical comparisons, beef and dairy make so much product using so few animals that they’re still 1-3 orders of magnitude better than eggs or chicken. Or rather, the message of “eat less chicken” and “if you’re going to eat animal products, eat dairy and beef” still makes sense even if we account for the maximum number of other animals created incidental to production of each food animal. I’m going to call these the “direct and incidental animals” (DIA) involved in a single animal’s worth of product.

The question is complicated by the fact that “incidental” animals still go into another part of the system. Day-old male chicks are used for pet and livestock food, and male dairy calves are raised for meat.

Given that these male calves are tied to dairy production, it seems unlikely that production of dairy and meat is what it would be if they weren’t connected. For instance, if there is less demand for dairy and thus fewer male dairy calves, it seems like one of the following should happen:

  1. No change to meat calf supply, less meat will be produced (DIA estimates seem correct)
  2. Proportionally more meat calves will be raised (original estimates seem correct)
  3. Something between the above (more likely)

Reframed: It depends on whether demand for dairy increases the meat supply and makes it less profitable to raise meat cows, or whether demand for meat makes it more profitable to raise dairy cows, or both. I’m not an economist, and I won’t settle which of these is the case. (I tried to figure it out and didn’t make much headway.) That said, it seems likely that the actual expected number of animal lives or days of suffering is somewhere between the initial numbers and my adjusted values for each source.

The most significant change from the original findings suggests that meat cows cause a fair bit more suffering, over a longer period of time, than the original calculations predict – but only if demand for meat is significantly propping up the dairy industry. Even if that’s true, the suffering caused by beef is still a little smaller than that caused by pork, and nowhere near as much as that caused by smaller animals.

Modifications to other estimates including direct and incidental animals (DIA)

| | Tomasik’s original estimate | DIA Tomasik’s estimate | Galef’s original estimate | DIA Galef’s estimate |
|---|---|---|---|---|
| Milk | 0.12 equivalent days of suffering caused per kg demanded | 0.14 equivalent days of suffering caused per kg demanded | 0.000057 max lives per 1000 calories of milk | 0.00013 max lives per 1000 calories of milk |
| Beef | 1.9 max equivalent days of suffering caused per kg demanded | 4.74 max equivalent days of suffering caused per kg demanded | 0.002469 max lives per 1000 calories | 0.0029 max lives per 1000 calories |
| Eggs | 110 equivalent days of suffering caused per kg demanded | 125 equivalent days of suffering caused per kg demanded | 0.048485 lives per 1000 calories | 0.048485 lives per 1000 calories |

That’s basically it. For a little more info and how I came to these conclusions, read on.

Longer version

On the topic of effectively helping animals, one thing I’ve heard a few times is that eating dairy and beef isn’t terribly harmful, since they come from such large animals that a serving of beef or milk is a very small part of the animal’s output. On the other hand, chickens are very small – an egg is a day’s output of an animal, and a family can eat an entire chicken in one dinner. Combine that with the fact that most chickens are raised in extremely unnatural and unpleasant conditions, and you have a framework for directly comparing the suffering that goes into different animal products.

This calculation has been made by three people I’m aware of – Brian Tomasik on his website, William MacAskill in his book Doing Good Better, and Julia Galef on her blog. The organization One Step for the Animals also recommends people stop eating chickens, on these grounds, but I didn’t find a similar breakdown on their website after a few minutes of looking. It’s still worth checking out, though. (Did you know chicken consumption, in pounds/year, has surpassed beef consumption and is still climbing, but only over the last 20 years?)

Galef compares calories per life. She includes the male chicks killed for each egg-laying hen.

Tomasik looks at “days of suffering caused per kg demanded”.

MacAskill briefly examines three factors: the number of animal years and the number of animal lives that go into a year of eating the average omnivorous American diet, plus numerical “quality of life” estimates from Bailey Norwood. (He doesn’t combine these factors numerically so much as use them to establish a basis for recommending that people avoid chicken. I didn’t do an in-depth analysis of his, but it’s safe to say that, like the others, his conclusions don’t seem to change significantly when other animal lives are added in.)

With pigs and meat chickens, the case is straightforward – both sexes are raised for meat, and suppliers breed animals to sell them and retain enough to continue breeding. The aged animals are eventually slaughtered as meat as well.

But only female hens lay eggs. Meat chickens and egg chickens raised at scale in the USA are two different breeds, so when a breeder produces laying hens, they wind up with more male chicks than are needed for breeding. Similarly, dairy cows have to give birth to a calf every season they produce milk. The average dairy cow gives 2.4 calves in her lifetime, and slightly less than 1.2 of those are male. The male egg chicks and male dairy calves are used for meat.

Aged dairy cows and egg-laying chickens are also sold as meat. “Spent hens” that are no longer commercially profitable, at 72 weeks old, are sold for ‘processed chicken meat’. (Other sources claim pet food or possibly just going to landfills. Pet food sounds reasonable, but landfills seem unlikely to me, at least for large operations.) There aren’t as many of these as either cows or chickens raised directly for meat, so they’re a comparatively small fraction, but they’re clearly still feeding into the meat system.

🐔

When talking about this, we quickly run into some economic questions, like “perhaps if the demand for dairy dropped, the meat industry would start raising more calves for meat instead?”

My intuition says it ought to shake out one way or the other – either decreasing demand for dairy cows results in the price of meat going up, or decreasing demand for meat results in demand for dairy cows going down.

In the egg case, male chicks aren’t literally put in a landfill – they’re ground up and sold for pet food. Without this otherwise unused source of protein, would pet food manufacturers increase demand for some other kind of meat? It seems plausible both that this would happen and that the price of pet food would increase, in which case less might be bought to make up the difference – though demand for cheap pet food is presumably somewhat inelastic, at least in the short term.

My supply and demand curves suggest that both demand should decline and price should increase. That said, we’re leaving the sphere of my knowledge and I don’t know how to advise you here. For the moment, I’m comfortable folding in both animals produced in the supply chain for a product, and animals directly killed or used for a product. But based on the economic factors above, these still don’t equate to “how many animal lives / days are expected to be reduced in the long term by avoiding consumption of a given product.”

At most, though, each dairy cow brings an extra 1.2 meat cows into existence, each meat cow brings an extra 0.167 dairy cows, and each egg-laying hen brings an extra male chick that is killed around its first day. These are the “direct and incidental animals” created for each animal directly used in production.

 

Some notes on the estimates below:

I ignored things like fish and krill meal that go into production. Tomasik notes that 37% of the global fish harvest (by mass) is ground and used for animal feed for farmed fish, chickens, and pigs. But this seems to be mostly from wild forage fish, not farmed fish, and wild populations are governed by a different kind of population optimum – niches. We’d guess that each fish removed from the environment frees up resources that will be eaten by, on average, one new fish. (Of course, populations we’re fishing seem to be declining, so something is happening, but it’s certainly not one-to-one.)

I also only looked at egg-laying chickens, meat cows, and dairy cows. This is because pork and other industries aren’t sex-segregated – all babies born are raised for the same thing. A few will be kept aside and used to produce more babies, but even the breeding ones will eventually be turned into meat. The number of days these animals live probably affects Tomasik’s calculations somewhat, but the breeding animals are still a minority.

I also didn’t include a detailed analysis of veal or foie gras, because if you’re concerned about animal welfare, you probably already don’t eat them. (I’m going to assert that if you want to eat ethically treated food, avoid any meat whose distinguishing preparation characteristic is “force-feed a baby.”) Veal is a byproduct of the dairy industry, but accounts for a minority of the calves. Foie gras does have a multiplier effect: female geese don’t fatten up as much and are killed early, so for each goose turned into foie gras, another goose is killed young.

Old dairy cows and laying hens are used for meat, but it’s a minority of the meat production. I didn’t factor this in. See What happens to cows in the US for more on cows.

Modifications to other estimates including direct and incidental animals (DIA)

| | Tomasik’s original estimate | DIA Tomasik’s estimate | Galef’s original estimate | DIA Galef’s estimate |
|---|---|---|---|---|
| Milk | 0.12 equivalent days of suffering caused per kg demanded | 0.14 equivalent days of suffering caused per kg demanded | 0.000057 max lives per 1000 calories of milk | 0.00013 max lives per 1000 calories of milk |
| Beef | 1.9 max equivalent days of suffering caused per kg demanded | 4.74 max equivalent days of suffering caused per kg demanded | 0.002469 max lives per 1000 calories | 0.0029 max lives per 1000 calories |
| Eggs | 110 equivalent days of suffering caused per kg demanded | 125 equivalent days of suffering caused per kg demanded | 0.048485 lives per 1000 calories | 0.048485 lives per 1000 calories |

DIA modifications to Tomasik’s estimate

(Days of equivalent suffering / kg)

To adjust this estimate, I added the extra “equivalent days of suffering caused per kg demanded” for the other animals:

Egg-laying chickens
[(4 suffering per day of life in egg-laying chickens × 501 days of life) + 1 × (3 suffering per day of life in meat chickens × 1 day of life)] / 16 kg of edible product over the life of an egg-laying chicken = 125 max equivalent days of suffering caused per kg demanded (vs 110)

Dairy cows
[(2 suffering per day of life in milk cows × 1825 days of life) + 1.2 × (1 suffering per day of life in meat cows × 395 days of life)] / 30000 kg of edible product over the life of a dairy cow = 0.14 max equivalent days of suffering caused per kg demanded (vs 0.12)

Meat cows
[(1 suffering per day of life in meat cows × 395 days of life) + 0.167 × (2 suffering per day of life in dairy cows × 1825 days of life)] / 212 kg of edible product over the life of a meat cow = 4.74 max equivalent days of suffering caused per kg demanded (vs 1.9)

The meat cow number is the only very different one here.
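The adjusted figures above can be reproduced with a few lines of Python. The suffering weights, lifespans, and yields are the ones quoted in this post (from Tomasik’s estimates); the function name and structure are mine, for illustration only:

```python
# A sketch of the adjusted calculation above, using the suffering weights,
# lifespans, and yields quoted in this post. Only the numbers come from
# the post; the function itself is my own framing.

def dia_suffering_per_kg(direct, incidentals, kg_product):
    """Equivalent days of suffering per kg demanded, counting direct and
    incidental animals.

    direct: (suffering_per_day, days_of_life) for the animal used directly.
    incidentals: list of (count, (suffering_per_day, days_of_life)) pairs.
    kg_product: kg of edible product over the direct animal's life.
    """
    total = direct[0] * direct[1]
    for count, (per_day, days) in incidentals:
        total += count * per_day * days
    return total / kg_product

# Laying hen (4 x 501 days) plus 1 culled male chick (3 x 1 day), 16 kg of eggs
eggs = dia_suffering_per_kg((4, 501), [(1, (3, 1))], 16)          # ~125
# Dairy cow (2 x 1825) plus 1.2 meat calves (1 x 395), 30,000 kg of milk
milk = dia_suffering_per_kg((2, 1825), [(1.2, (1, 395))], 30000)  # ~0.14
# Meat cow (1 x 395) plus 0.167 dairy cows (2 x 1825), 212 kg of beef
beef = dia_suffering_per_kg((1, 395), [(0.167, (2, 1825))], 212)  # ~4.74
```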

DIA modifications to Galef’s estimate

I adjusted this by adding other lives to Galef’s estimate of lives per 1000 calories:

Egg-laying chicken
Galef included this in her calculation of 0.048485 lives per 1000 calories of eggs.

Dairy cows
[0.000057 lives per 1000 calories of milk] * 2.2 = 0.00013 max lives per 1000 calories of milk
[0.000075 lives per 1000 calories of cheese] * 2.2 = 0.00017 max lives per 1000 calories of cheese

Meat cows
[0.002469 lives per 1000 calories of beef] * 1.167 = 0.0029 max lives per 1000 calories of beef
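These adjustments are just the original lives-per-1000-calories figures multiplied by the direct-and-incidental-animals multiplier (2.2 for dairy, 1.167 for beef). A minimal sketch using the post’s numbers, with variable names of my own:

```python
# Galef-style DIA adjustment: original lives per 1000 calories times the
# direct-and-incidental-animals (DIA) multiplier. Figures are this post's.

lives_per_1000_cal = {"milk": 0.000057, "cheese": 0.000075, "beef": 0.002469}
dia_multiplier = {"milk": 2.2, "cheese": 2.2, "beef": 1.167}

dia_lives = {food: lives * dia_multiplier[food]
             for food, lives in lives_per_1000_cal.items()}
# milk ~0.00013, cheese ~0.00017, beef ~0.0029 max lives per 1000 calories
```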

Other economic notes

I’m hoping someone who knows more here will be able to make use of the information I found.

The number of meat cows in the US has been broadly decreasing since 1970. The number of dairy cows has also been decreasing since at least 1965, but dairy consumption is increasing, because those cows are giving far more milk.

When dairy prices drop, dairy farmers are known to kill some of their herds and sell them for meat, leading to a drop in meat prices.

We would also expect dairies and beef farms to compete with each other for some of the same resources, like land and feed.

A friend wondered whether dairy steers are much smaller than beef cows – if so, shifting the same volume of meat production to dairy steers would mean more animal lives. It turns out that dairy steers and beef cows are about the same weight at slaughter.


(1) With fish perhaps representing much more suffering than eggs or chicken, and other large meat sources like pigs somewhere in the middle.


 

This blog has a Patreon. If you like what you’ve read, consider giving it a look.

An invincible winter

[Picture in public domain, taken by Jon Sullivan]

Early September brought Seattle what were to be some of the hottest days of the summer. For weeks, people had been turning on fans, escaping to cooler places to spend the day, buying out air conditioners, which most of the city didn’t own. I cowered in my room with an AC unit on loan from a friend lodged in the window, only going out walking when the sun had set.

That week, Eastern Washington was burning. It does that every summer. But this year, a lot of Eastern Washington was burning. Say it with me – 2017 was one of the worst fire years on record. That week, the ash from the fires drifted over Seattle. You smelled smoke everywhere in the city. The sky was gray. At sunrise and sunset, the sun was blood-red. One day, gray ash drifted down from the sky, the exact size of snowflakes. It dusted the cars and kept falling through the afternoon.

That day, people said the weather was downright apocalyptic. They weren’t entirely wrong.

Many people aren’t clear on what exactly a nuclear winter is. The mechanism is straightforward. When cities burn in the extreme heat of a nuclear blast – and we do mean cities, plural; most nuclear exchange scenarios involve multiple strikes for strategic reasons – they turn into soot, and the soot floats up. If enough ash from burned cities reaches the stratosphere, the upper layer of the atmosphere, it stays there for a long time. The ash clouds blot out the sun, cool the earth, and choke off the growth of crops. Within weeks, agriculture grinds to a halt.

There’s a lot of uncertainty around nuclear winter. But by one estimate, the detonation of less than 1% of the world’s nuclear arsenal – a fairly small war – could drop temperatures by five degrees Celsius, with the world warming up again only slowly, over twenty years. The ozone layer would thin. Less rain would fall. Two billion people would starve.

On Tuesday and Wednesday that week, the temperature was predicted to reach over 100 degrees. It didn’t. The particulates in the air blocked enough of the sun’s heat that it barely hit the 90s. Pedestrians didn’t quite breathe easier, but did sweat less. Our own tiny, toy-model taste of a nuclear apocalypse.


I’d been feeling strange for the last few weeks, unrelatedly, and sitting at my desk for hours, my mind did a lot of wandering. I hoped things would be looking up – I’d just gotten back from an exciting conference with good friends, and also from seeing the solar eclipse.

I’d made the pilgrimage with friends. We drove for hours, east across the mountains the week before they burned. We crossed the Columbia River into Oregon, and finally drove up a winding dirt road to a wide clearing with a small pond. I studied for the GRE in the shadows of dry pines. We played tug-of-war with the crayfish and watched the mayflies dance above the pond. The morning of, the sun climbed in the sky, and I had never appreciated how invisible the new moon is, or how much light the sun puts out – even when it was half-gone, we still had to peer through black plastic glasses to confirm that something had changed. But soon, it became impossible not to notice.

I kept thinking about what state I would have been thrown into if I hadn’t known the mechanism of an eclipse – how deep the pits of my spiritual terror could go, and whether it would be limited by biology or belief. As it was, it was only sort of spiritually terrifying, in a good way. The part of my brain that knew what was happening had spread that knowledge to all the other parts as well, so I could run around in excitement and really appreciate the chill in the air, the eerie distortion of shadows into slivers, and finally, the moon sealing off the sun.

The solar corona.

The sunset-colored horizon all around the rim of the sky.

Stars at midday.

We left after the daylight returned, but while the moon was still easing away, eager to beat the crowds back to the city. I thought about the mayflies in the pond, and their brief lives – the only adults in hundreds of generations to see the sun, see the stars, and then see the sun again.

I thought something might shake loose in my brain. Things should have been looking up, but the adventures had scarcely touched the inertia. Oh, right, I had also been thinking a lot about the end of the world.

I wonder about the mental health of people who work in existential risk. I think it must vary. I know people who are terrified on a very emotional and immediate level, and I know people who clearly believe it’s bad but don’t get anxious over it, and aren’t inclined to start. I can’t blame them. I used to be more of the former, and now I’m not sure if it’s eased up or if I’m just not thinking about things that viscerally scare me anymore. I’m not sure the existential terror can tip you towards something you weren’t predisposed to. In my case, I don’t think the mental fog was from it. But the backdrop of existential horror certainly lent it an interesting flavor.


It’s late October now. I’ve pulled out the flannel and the space heater and the second blanket for the bed. When I went jogging, my hands got numb. I don’t mind – I like autumn, I like the descent into winter, heralded by rain and red leaves and darkness, and the trappings of horror and then of the harvest. Peaches in the summertime, apples in the fall. The seasons have a comforting rhyme to them.

That strange inertia hasn’t quite lifted, but I’m working on it. Meanwhile, the world continues to cant sideways. When we arranged the Stanislav Petrov day party in Seattle this year, to celebrate the day a single man decided not to start World War 3, I wondered if we should ease up on the “nuclear war is a terrifying prospect” theme we had laid on last year. I thought that had probably been on people’s minds already.

So geopolitical tensions are rising, and have been rising. The hemisphere gets colder. Not quite out of nostalgia, my mind keeps flickering back to last month, to not-quite-a-hundred-degrees Seattle, to the red sun.

There’s a beautiful quote from Albert Camus: “In the midst of winter, I found there was, within me, an invincible summer.” That Tuesday, like the momentary pass of the moon over the sun in mid-day, in the height of summer, I saw the shadow of a nuclear winter.



For a more detailed exploration of the mechanics of nuclear winter and why we need more research, look at this piece from Global Risk Research Network.

What do you do if a nuclear blast is going to go off near you? Read this piece. Maybe read it beforehand.

What do you do if you don’t want a nuclear blast to go off near you? The Ploughshares Fund is one of the canonical organizations funding work on reducing risks from nuclear weapons. You might also be interested in Physicians for Social Responsibility and the Nuclear Threat Initiative.

This blog has a Patreon. If you like what you’ve read, consider giving it a look.

What happens to cows in the US?

(Larger version. Image released under a CC-BY SA 4.0 license.)

There are 92,000,000 cattle in the USA. Where do they come from, what are they used for, and what are their ultimate fates?

I started this as part of another project, and was mostly interested in what happens to the calves of dairy cows. As I worked, though, I was astonished that I couldn’t easily find this information laid out elsewhere, and decided to publish it on its own to start.


Note: Numbers are not exactly precise, and come from a combination of raw data from 2014-2016 and guesswork. Also, the relative sizes on the graph (of arrows and boxes) are not accurate – they’re hand-sized based on eyeballing the numbers and the available settings in yEd. I’m a microbiologist, not a graphic designer, what do you expect? If that upsets you, try this version, which is also under a CC-BY SA 4.0 license. If you want to make a prettier or more accurate version, knock yourself out.

There are some changes from year to year, which might account for small (<5%) discrepancies. I also tried to generalize from practices used on larger farms (e.g. operations with over 1,000 cows), which make up a minority of the farms but house a majority of the cattle.

In the write-up, I try to clearly label “male cattle” and “female cattle” or “female cows” when relevant, because this confused me to no end when I was gathering data.


Let’s start with dairy cows. There are 9,267,000 female cows actively giving milk this season (“milk cows”) in the USA. For a cow to give milk, it has to be pregnant and give birth. That means that 9,267,000 calves are born to milk cows every year.

Roughly half of these – in fact, slightly more than half – are female. Most milk cows are impregnated at around 2 years old with artificial insemination. There’s a huge market in bull sperm, and 5% of the sperm sold in the US is sex-selected, meaning that 90% of the sperm in a given application is one sex. Dairies are mostly interested in having more female cows, so it seems like 2.25% of the milk cow calves that would have been male (by chance) are instead female (because of this technology).

The female calves almost all go back into being milk cows. The average dairy cow has 2.4 lactation periods before she’s culled, so she breeds at a little over her replacement rate. I’m actually still not 100% certain where that 0.2-nd female calf goes, but dairies might sell extra females to be beef cattle along with the males.

The 2,755,000 milk cows that are culled each year are generally turned into lean hamburger. They’re culled because of infection or health problems, or age and declining milk volume. They’re on average around 4 years old. (Cows can live to 10 years old.)

Male calves are, contrary to some claims, almost never just left to die. The veal industry exists, in which calves are kept in conditions ranging from “not that different from your average cow’s environment” to “absolutely terrible”, and are killed young for their meat. It seems like between 450,000 and 1,000,000 calves are killed for veal each year, although that industry is shrinking. I used the 450,000 number.

Some of the male calves are kept and raised, and their sperm is used to impregnate dairy cows. This article describes an artificial insemination company, which owns “1,700 dairy and beef bulls, which produce 15 million breeding units of semen each year.” That’s about 1 in 1,000, a minuscule fraction of the male calves.

The rest of those male calves, the dairy steers, are sold as beef cattle. After veal calves, we have 3,952,000 remaining male calves to account for. They make up 14% of the beef supply of the 30,578,000 cattle slaughtered annually. From those numbers, we’d guess that 4,060,000 dairy steers are killed yearly – and that’s close enough to the above estimate that I think we’ve found all of our male calves. That’s only a fraction of the beef supply, though – we’ll now turn our attention to the beef industry.

We imported 1,980,000 cattle from Canada and Mexico in 2015, mostly for beef. We also export a few, but it’s under 100,000, so I left it off the chart.

Most beef cows are bred on calf-cow operations, which either sell the calves to feedlots or raise them for meat directly. To replace their stock, they either keep some calves to breed more cows, or buy them from seedstock operations (which sell purebred or other specialty cattle). Based on the fact that 30,578,000 cattle are slaughtered annually (and we know how some of them are already killed), and that cattle are being bred at the replacement rate, it seems like each year, calf-cow operations generate 21,783,000 new calves. There’s a lot of variation in how beef cattle are raised, which I’m mowing over in this simplified graph. In general, though, they seem to be killed at between 1.5 and 3 years old.

Of course, calf-cow operations also need breeding cattle to keep the operation running, so while some of those cows are raised only for meat, some are also returned to the breeding pool. (Seedstock operations must be fairly small – under 3% of cattle in the US are purebred – so I think calf-cow operations are the majority worth examining.) Once they’re no longer productive breeders, breeding animals are also culled for beef.

This article suggests that 14-15% of cows are culled annually, I think on calf-cow operations that raise cows for slaughter themselves (although possibly only on smaller farms). If that’s the case, then each year they must create about 14.5% more calves than are raised only for meat. This suggests that 21,783,000 cattle born to calf-cow operations are raised for meat, with another 2,759,000 calves going back into breeding each year. These will mostly be females – there seems to be a 1:15-25 ratio of males to females on calf-cow operations – so disproportionately more males will go directly to beef.

By adding up the bottom numbers, we get ~30,600,000 cattle slaughtered per year. In terms of doing math, this is fortunate, because we also used that number to derive some of the fractions therein. We can also add up the top numbers to get 33,030,000 born, which is confusing. If we take out the 450,000 veal calves and the 1,980,000 imported calves, it drops back to the expected value, which I think means I added something together incorrectly. While I’m going to claim this chart and these figures are mostly right, please do let me know if you see holes in the math. I’m sure they’re there.
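The bookkeeping in the paragraph above can be checked mechanically. This short sketch uses only the figures quoted in this post, so treat it as a consistency check on my own numbers, not independent data:

```python
# A mechanical check of the tallies above, using only figures quoted in
# this post - a sanity check on the post's own numbers, not new data.

dairy_calves = 9_267_000    # calves born to milk cows each year
cow_calf_born = 21_783_000  # new calves from calf-cow operations each year
imports = 1_980_000         # cattle imported from Canada and Mexico (2015)
veal = 450_000              # calves killed for veal each year
slaughtered = 30_578_000    # cattle slaughtered annually

# Summing the "top numbers" on the chart overshoots the slaughter total...
top_sum = dairy_calves + cow_calf_born + imports  # 33,030,000

# ...but removing the veal calves and the imported cattle brings it back
# close to the reported annual slaughter figure.
adjusted = top_sum - veal - imports               # 30,600,000
```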


“Wow, Georgia, I’m surprised, I really thought this was going to veer off into the ethics of the dairy industry or something.”

Ha ha. Wait for Part 2.

This blog has a Patreon! If you like what you’ve read, consider checking it out.

Diversity and team performance: What the research says

(Photo of group of people doing a hard thing from Wikimedia user Rizimid, CC BY-SA 3.0.)

This is an extended version (more info, more sources) of the talk I gave at EA Global San Francisco 2017. The other talk I gave, on extinction events, is here. Some more EA-focused pieces on diversity, which I’ve read but which were assembled by the indomitable Julia Wise, are:

Effective altruism means effective inclusion

Making EA groups more welcoming

EA Diversity: Unpacking Pandora’s Box

Keeping the EA Movement welcoming

How can we integrate diversity, equity, and inclusion into the animal welfare movement?

Pitfalls in diversity outreach


There are moral, social, and other reasons to care about diversity, all of which are valuable. I’m only going to look at one aspect: performance outcomes. The information I’m drawing on here comes primarily from meta-studies and experiments in a business context.

Diversity here mostly means demographic diversity (culture, age, gender, race) as well as informational diversity – educational background, for instance. As you might imagine, each of these has different impacts on team performance, but if we treat them as facets of the same thing (“diversity”), some interesting things fall out.

(Types of diversity which, as far as I’m aware, these studies largely didn’t cover: class/wealth, sexual orientation, non-cis genders, disability, most personality traits, communication style, etc.)

Studies don’t show that diversity has an overall clear effect, positive or negative, on the performance of teams or groups of people. (1) (2) The same may also be true on an organizational level. (3)

If we look at this further, we can decompose it into two effects (one where diversity has a neutral or negative impact on performance, and one where it has a mostly positive impact): (4) (3)

Social categorization

This is the human tendency to have an ingroup / outgroup mindset. People like their ingroup more. It’s an “us and them” mentality and it’s often totally unconscious. When diversity interacts with this, the effects are often – though not always – negative.

Diverse teams tend to have:

  • Lower feelings of group cohesion / identification with group
  • Worse communication (3)
  • More conflict (of productive but also non-productive varieties) (also the perception of more conflict) (5)
  • Biases

A silver lining: One of these ingrouping biases is the expectation that people more similar to us will also think more like us. Diversity clues us into diversity of opinions. (6) This gets us into:

Information processing 

— 11/9/17 – I’m much less certain about my conclusions in this section after further reading. Diversity’s effects on creativity/innovation and problem-solving/decision-making have seen mixed results in the literature. See the comments section for more details. I now think the counterbalancing positive force of diversity might mostly be as a proxy for intellectual diversity. Also, I misread a study that was linked here the first time and have removed it. The study is linked in the comments. My bad! —

Creative, intellectual work. (7) Diversity’s effects here are generally positive. Diverse teams are better at:

  • Creativity (2)
  • Innovation (9)
  • Problem solving. Gender diversity is possibly more strongly correlated with group problem-solving performance than the individual intelligence of group members is. (Note: A similarly-sized replication failed to find the same results. Taymon Beal kindly brought this to my attention after the talk.) (10)

Diverse teams are more likely to discuss alternate ideas, look at data, and question their own beliefs.


This loosely maps onto the “explore / exploit” or “divergent / convergent” processes for projects. (2)

    1. Information processing effects benefit divergent / explore processes.
    2. Social categorization harms convergent / exploit processes.

If your group is just trying to get a job done and doesn’t have to think much about it, that’s when group cohesiveness and communication are most important, and diversity is less likely to help and may even harm performance. If your group has to solve problems, innovate, or analyze data, diversity will give you an edge.


How do we get less of the bad thing? Teams work together better when you can take away harmful effects from social categorization. Some things that help:

    1. The more balanced a team is along some axis of diversity, the less likely you are to see negative effects on performance. (12) (7) Having one woman on your ten-person research team might not do much to help and might trigger social categorization. If you have five women, you’re more likely to see benefits.
    2. Remote teams are less biased, at least with respect to gender – online teams are less prone to gender bias.
    3. Time. Obvious diversity becomes less salient to a group’s work over time, and diverse teams end up outperforming non-diverse teams. (13) (6) Recognition of less-obvious cognitive differences (e.g. personality and educational diversity) increases over time. As we might hope, the longer a group works together, the less surface-level differences matter.

This article has some ideas on minimizing problems from language fluency, and also for making globally dispersed teams work together better.


How do we get more of the good thing? Diversity is a resource – more information and cognitive tendencies. Having diversity is a first step. How do we get more out of it?

    1. A high need for cognition – at least for age and educational diversity. This is the drive of individual members to find information and think about things. (It’s not the same as, or especially correlated with, either IQ or openness to experience (1).)

Harvard Business Review suggests that diversity triggers people to stop and explain their thinking more. We’re biased towards liking and not analyzing things we feel more comfortable with – the “fluency heuristic.” (14) This is uncomfortable work, but if people enjoy doing it, they’re more likely to do it, and get more out of diversity.

But need for cognition is also linked with doing less social categorization at all, so maybe diverse groups with high levels of this just get along better or are more pleasant for all parties. Either way, a group of people who really enjoy analyzing and solving problems are likely to get more out of diversity.

    2. A positive diversity mindset. This means that team members have an accurate understanding of the potential positive effects of diversity in the context of their work. (4) If you work at a charity, you might expect the group brainstorming new ways to reach donors to benefit more from diversity than the group assigned to fix your website. That’s probably true. But it’s especially true if they understand how diversity will help them in particular. You could have your team brainstorm ideas, or look up how diversity affects your particular task. (I was able to find results quickly for diversity in fundraising, diversity in research, and diversity in volunteer outreach, so there are resources out there.)


Again, note that diversity’s effect size isn’t huge. It’s smaller than the effect sizes of support for innovation, external and internal communication, vision, task orientation, and cohesion – all things you might correctly expect to correlate with performance more strongly than diversity does (8). That said, I think a lot of people [at EA Global] want to do these creative, innovative, problem-solving things: convince other people to change lives, change the world, stop robots from destroying the earth. All of these are really important and really hard, and we need any advantage we can get.


  1. Work Group Diversity
  2. Understanding the effects of cultural diversity in teams: A meta-analysis of research on multicultural work groups
  3. The effects of diversity on business performance: Report of the diversity research network
  4. Diversity mindsets and the performance of diverse teams
  5. The biases that punish racially diverse teams
  6. Time, Teams, and Task Performance
  7. Role of gender in team collaboration and performance
  8. Team-level predictors of innovation at work: A comprehensive meta-analysis spanning three decades of research
  9. Why diverse teams are smarter
  10. Evidence of a collective intelligence factor in the performance of human groups
  11. When and how diversity benefits teams: The importance of team members’ need for cognition
  12. Diverse backgrounds and personalities can strengthen groups
  13. The influence of ethnic diversity on leadership, group process, and performance: an examination of learning teams
  14. Diverse teams feel less comfortable – and that’s why they perform better