Are viruses alive?

Whether viruses are alive or not is a silly question. Here’s why.

(I make a handful of specific claims here that I expect are not universally agreed upon. In the spirit of tagging claims and also as a TL;DR, I’ll list them.)

  • Whether things are alive or not is a categorization issue.
  • The criterion that living organisms must be made of cells is a bad one, even excluding viruses.
  • Some viruses process energy.
  • A virus alone may not process energy, but a virus-infected cell does, and meets all criteria for life.
  • Viruses are not an edge case in biology, they’re central to it.
  • The current criteria for life seem to be specifically set up to exclude viruses.
[Image: Bacteriophage infecting a cell. Electron micrograph by Dr. Graham Beards, CC BY-SA 3.0.]

What does it mean to be alive?

Whether viruses are alive is a semantic issue. It isn’t a question about reality, in the same way that “how many viruses are there?” or “do viruses have RNA?” are questions about reality. It’s a definitional question, and whether they fall in the territory of “alive” or not depends on where you draw the borders.

Fortunately, scientists tentatively use a standard set of borders. It’s not exactly set in stone, but it’s a starting point. In intro biology in college, I learned the following 7 characteristics (here, copied from Wikipedia)*:

  1. Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature

  2. Organization: being structurally composed of one or more cells — the basic units of life

  3. Metabolism: transformation of energy by converting chemicals and energy into cellular components (anabolism) and decomposing organic matter (catabolism). Living things require energy to maintain internal organization (homeostasis) and to produce the other phenomena associated with life.

  4. Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size in all of its parts, rather than simply accumulating matter.

  5. Adaptation: the ability to change over time in response to the environment. This ability is fundamental to the process of evolution and is determined by the organism’s heredity, diet, and external factors.

  6. Response to stimuli: a response can take many forms, from the contraction of a unicellular organism to external chemicals, to complex reactions involving all the senses of multicellular organisms. A response is often expressed by motion; for example, the leaves of a plant turning toward the sun (phototropism), and chemotaxis.

  7. Reproduction: the ability to produce new individual organisms, either asexually from a single parent organism or sexually from two parent organisms.

The simple answer

Viruses meet all of the criteria for living things, except 2) and maybe 3).

The complicated answer

For the complicated answer, let’s go a level deeper.

Simply put, criterion 2) states that living things must be made of cells.

Criterion 3) states that living things must metabolize chemical energy in order to power their processes.

Are viruses made of cells?

Definitely not.

Okay, here’s what I’ve got. I think 2) is a bad criterion. Criteria for living things should not be restricted to earth*, and therefore should not be restricted to our phylogenetic history. Cells are a popular structure on earth, but if we go to space and find large friendly aliens that are made of proteins, reproduce, evolve, and have languages, we’re not going to call them “non-living” just because they run on something other than cells. Even if the cell criterion is useful up until that point, we’d change it after we found those aliens – which suggests it wasn’t a good criterion in the first place.

(Could large aliens not be made out of cells? Difficult to say – multicellularity has been a really, really popular strategy here on earth, having evolved convergently at least 25 times. But cells as we know them only evolved once or twice. Also, it’s not clear to what degree convergent evolution applies to things outside of our particular evolutionary history, because n=1.)

So no, viruses don’t meet criterion 2), although the importance of criterion 2) is debatable.

Do viruses process energy?

What about criterion 3)? Do viruses process energy? Kind of.

Let’s unpack “processing energy.” Converting one kind of chemical energy to another is pretty generic. In bacteria and eukaryotes, what does that look like?

[Image: Some metabolic pathways used by cellular life. Large version.]

Go ahead. Enlarge it. Look around. Contemplate going into biochemistry. Here’s where it starts to get complicated.

One of the major energy sources in cells is the conversion of adenosine triphosphate (ATP) into adenosine diphosphate (ADP). This transformation powers so many cellular processes in so many different organisms that ATP is called the currency of life.

Bacteriophage T4 encodes an ATP→ADP-powered motor. It’s used during the virus’ reproduction, to package DNA inside nascent virus heads.

Some viruses of marine cyanobacteria encode various parts of the electron transport chain, the series of motors that pump protons across membranes and create a gradient that results in the synthesis of ATP. They encode these as a sort of improvement on the ones already present in the hosts.

Do those viruses process chemical energy? Yes. If you’re not convinced, ask yourself: Is there some other pathway you’d need to see before you consider a virus to encode a metabolism? If so, are you absolutely certain that we will never find such a virus? I don’t think I would be.

Wait, you may say. Sure, the viruses encode those and do those when infecting a host. But the viruses themselves don’t do them.

To which I would respond: A pathogenic bacterial spore is, basically, metabolically inert. If it nestles into a warm, nutrient-rich host, it blossoms into life. Our understanding of living things includes a lot of affordance for stasis.

By the same token, a virus is a spore in stasis. A virus-infected cell meets all the criteria of life.

(I think I heard this idea from Lindsay Black’s talk at the 2015 Evergreen Bacteriophage meeting, but I might be misremembering. The scientists there seemed very on-board with the idea, and they certainly have another incentive to claim that their subjects are alive, which is that studying living things sounds cooler than studying non-living things – but I think the point is still sound.)

Do we really want only some viruses to count as alive?

To summarize, cells infected by T4 or some marine cyanophages – and probably other viruses – meet all of the criteria of life.

It seems ridiculous to include only those viruses in the domain of ‘life’, and not others that don’t happen to run those chemical processes. Viruses have phylogeny. Declaring some viruses alive and others not is pruning branches off of the evolutionary tree. We want a category of life that carves nature at its joints, and picking only some viruses does the opposite of that.

Wait, it gets more complicated. Some researchers have proposed giant viruses as a fourth domain of life (alongside the standard bacteria, archaea, and eukaryotes). You’ll note that it’s giant viruses, and not all viruses. That’s because viruses probably aren’t monophyletic. Hyperthermophilic crenarchaeal phages, in addition to being a great name for your baby, share literally no genes with any other virus. Some other viruses have only extremely distant genetic similarities to others, which may have been swapped in by accident during past infections. This is not terribly surprising – we know that parasites have convergently evolved perhaps thousands of times. But it certainly complicates the issue of where to put viruses in the tree.

Viruses are not just an edge case

When people talk about the criteria of life, they tend to consider viruses as an edge case, a weird outlier. This is misleading.

[Image: The standard view of life.]
[Image: A more cosmopolitan view.]

Worldwide, viruses outnumber cells 10 times over. They’re not an edge case in biology – by number of individuals, or amount of ongoing evolution, they’re most of biology. And it’s rather suspicious that the standard criteria for life seem to be set up to include every DNA-containing evolving organism except for viruses. If we took out criteria 2) and 3), what else would that fold in? Maybe prions? Anything else?

Accepting that ‘life’ is a word that tries to draw out a category in reality, why do we care about that category? When we ask “is something alive?”, here are some questions we might mean instead.

  • Is something worth moral consideration? (Less than a bacterium, if at all.)
  • Should biologists study something? (A biologist is much more suited to study viruses than a chemist is.)
  • Does something fit into the tree of life? (Yes.)
  • If we find something like it on another planet, should we celebrate? (Yes, especially because a parasite has to have a host nearby.)

When I think of viruses – fast-moving, promiscuous gene-swappers, picking up genes from both each other and their hosts, polyphyletic, here from the beginning – I think of a parasitic vine weaving around the tree of life. It’s not exactly an answer, but it’s a metaphor that’s closer to the truth.


* Carl Sagan’s definition of life, presented to and accepted by a committee at NASA, is “a self-sustaining chemical system capable of Darwinian evolution.” This nicer, neater definition folds in viruses, prions, and aliens. The 7-point system is the one I was taught in college, though, so I’m writing about that.

Throw a prediction party with your EA/rationality group

TL;DR: Prediction & calibration parties are an exciting way for your EA/rationality group to practice rationality skills and celebrate the new year.

On December 30th, Seattle Rationality had a prediction party. Around 15 people showed up, brought snacks, brewed coffee, and spent several hours making predictions for 2017, and generating confidence levels for those predictions.

This was heavily inspired by Scott Alexander’s yearly predictions. (2014 results, 2015 results, 2016 predictions.) Our move was to turn this into a communal activity, with a few alterations to meet our needs and make it work better in a group.

Procedure:

  • Each person individually writes a bunch of predictions for the upcoming year. They can be about global events, people’s personal lives, etc.
    • If you use Scott Alexander’s system, create 5+ predictions for fixed confidence levels (50%, 60%, 70%, 80%, 90%, 95%, etc.)
    • If you want to generate Brier scores or logarithmic scores, just do 30+ predictions at whatever confidence levels you believe. (There’s a sketch of the scoring arithmetic after this list.)
  • Write down confidence levels for each prediction.
  • Save your predictions and put them aside for 12 months.
  • Open up your predictions and see how everyone did.
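
To make the scoring step concrete, here’s a minimal sketch in Python. The predictions, confidence levels, and outcomes are made up for illustration – this isn’t the exact method our group used:

    # Each prediction is (stated confidence that it happens, whether it happened).
    predictions = [
        (0.90, True),   # e.g. "I will switch jobs" at 90%, and it happened
        (0.60, False),  # e.g. "I'll get another tattoo" at 60%, and it didn't
        (0.95, True),
        (0.70, True),
    ]

    # Brier score: mean squared error between confidence and outcome.
    # 0.0 is perfect; flat 50% guessing earns 0.25.
    brier = sum((p - o) ** 2 for p, o in predictions) / len(predictions)
    print(f"Brier score: {brier:.3f}")

    # Calibration: within each confidence level, what fraction came true?
    # Well-calibrated 90% predictions should come true about 90% of the time.
    bins = {}
    for p, o in predictions:
        bins.setdefault(p, []).append(o)
    for conf, outcomes in sorted(bins.items()):
        print(f"{conf:.0%} predictions: {sum(outcomes)}/{len(outcomes)} came true")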

To make this work in a group, we recommend the following:

  • Don’t share your confidence levels. Avoid anchoring by just not naming how likely or unlikely you think any prediction is.
  • Do share predictions. Generating 30+ predictions is difficult, and sharing ideas (without confidence levels) makes it way easier to come up with a bunch. We made a shared google doc, and everyone pasted some of their predictions into it.
  • Make predictions that, in a year, will verifiably have happened or not. (IE, not “the academic year will go well”, which is debatable, but “I will finish the year with a 3.5 GPA or above”.)
  • It’s convenient to assume that, unless stated otherwise, predictions end by the end of the year (IE, “I will go to the Bay Area” means “I will go to the Bay Area at least once in 2017.”) It’s also fine to make predictions that have other end dates (“I will go to EA Global this summer.”)
  • Make a bunch of predictions first without thinking too hard about how likely they are, then assign confidence levels. This post details why. You could also generate a group list of predictions, and everyone individually lists their own confidence levels.

This makes a good activity for rationality/EA groups for the following reasons:

  • Practicing rationality skills:
    • Making accurate predictions
    • Using confidence levels
  • Accessibility
    • It’s open to many different knowledge levels. Even if you don’t know a thing about geopolitics, you can still give predictions and confidence levels about media, sports, or your own life.
    • More free-form and less intimidating than using a prediction market. You do not have to know about the details of forecasting to try this.
  • Natural time and recurring activity
    • You could do this at any point during the year, but doing it at the start of the year seems appropriate for ringing in the new year.
    • In twelve months, you have an automatic new activity, which is coming back together and checking everybody’s predictions from last year. Then you make a new set of predictions for next year. (If this falls through for some reason, everyone can, of course, still check their predictions on their own.)
  • Fostering a friendly sense of competitiveness
    • Everyone wants to have the best calibration, or the lowest Brier score. Everyone wants to have the most accurate predictions!

Some examples of the predictions people used:

  • Any open challenges from the Good Judgment Project.
  • I will switch jobs.
  • I will make more than $1000 in a way that is different from my primary job or stock.
  • I will exercise 3 or more times per week in October, November, December.
  • I’ll get another tattoo.
  • Gay marriage will continue to be legal in Washington state.
  • Gay marriage will continue to be legal in all 50 states.
  • I will try Focusing at least once.
  • I will go to another continent.
  • CRISPR clinical trials will happen on humans in the US.
  • A country that didn’t previously have nuclear weapons will acquire them.
  • I will read Thinking Fast and Slow.
  • I will go on at least 3 dates.

Also relevant:

  • 16 types of useful predictions
  • Brier values and graphs of ‘perfect’ vs. actual scores will give you different information. Alexander writes about the differences between these. Several of us did predictions last year using the Scott Alexander method (bins at fixed probabilities), although this year, everybody seems to have used continuous probabilities. The exact method by which we’ll determine how well-calibrated we were will be left to Seattle Rationality of 2018, but will probably include Brier values AND something to determine calibration.

(Crossposted from LessWrong.)

Triptych in Global Agriculture

As I write this, it’s 4:24 PM in 2016, twelve days before the darkest day of the year. The sun has just set, but you’d be hard-pressed to tell behind the heavy layer of marbled gray cloud. There’s a dusting of snow on the lawns and the trees, and clumps on roofs, already melted off the roads by a day of rain. From my window, I can see lights glimmering in Seattle’s International District, and the buildings of downtown are starting to glow with flashing reds, neon bands on the Columbia Tower, and soft yellow on a thousand office windows. I’m starting to wonder what to eat for dinner.

It’s the eve before Seattle Effective Altruism’s Secular Solstice, a somewhat magical humanist celebration of our dark universe and the light in it. This year, our theme is global agriculture – our age-old answer to the question of “what are we, as a civilization, collectively going to eat for dinner?” We have not always had good answers to this question.

Civilization, culture, and the super-colony of humanity, the city, started getting really big when agriculture was invented, when we could concentrate a bunch of people in one place and specialize. It wasn’t much specialization, at first. Farmers and hunter-gatherers were the vast majority of the population, and the population of Ur, the largest city on earth, was around 65,000 people in 3000 BC. Today, farmers are 40% of the global population, and 2% in the US. In the 1890s, the city of Shanghai had half a million people. Today, it’s the world’s largest city, with 34 million residents.

What happened in those 120 years, or even the last 5000?

Progress, motherfuckers.

I’m a scientist, so the people I know of are scientists, and science is what’s shaped a lot of our agriculture in the last hundred years. When I think of the legacy of science and global agriculture, of people trying to figure out how we feed everyone, I think of three people, and I’ll talk about them here. I’ll go in chronological order, because it’s the order things go in already.

Fritz Haber, 1868-1934

[Image: Fritz Haber in his laboratory.]

Haber was raised in a Jewish family in Prussia, but converted to Lutheranism after getting his doctorate in chemistry – possibly to improve his odds at high-ranking academic or military careers. At the University of Karlsruhe in Germany, Haber and his assistant Robert Le Rossignol did the work that won Haber a Nobel Prize: they invented the Haber-Bosch process.

The chemistry of this reaction is pretty simple – it was a fact of chemistry at the time that if you added ammonia to a nickel catalyst, the ammonia decomposed into hydrogen and nitrogen. Haber’s twist was to reverse it – by adding enough hydrogen and nitrogen gas at a high pressure and temperature, the catalyst operates in reverse and combines the two into ammonia. Hydrogen is made from natural gas (CH4, or methane), and nitrogen gas is already 80% of the atmosphere.
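
For reference, the overall reaction – standard textbook chemistry rather than anything specific to this story – is:

    N₂ + 3 H₂ ⇌ 2 NH₃    (over a metal catalyst, at high temperature and pressure)

The equilibrium only favors ammonia when the gases are squeezed together hard – four gas molecules become two – which is why the process runs at hundreds of atmospheres.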

Here’s the thing – plants love nitrogen. Nitrogen is largely the limiting factor in land plants’ growth – when plants aren’t growing like mad, it’s usually because they don’t have sufficient nitrogen to make new proteins. Give a plant nitrogen in a form it can assimilate, like ammonia, and it grows like mad. Meanwhile, the world’s natural nitrate and guano deposits were being stripped away to nothing, applied to crops to feed a growing population.

When Haber invented his process in 1909, ammonia became cheap. A tide was turning. The limiting factor of the world’s agriculture was suddenly no longer limiting.

Other tides were turning too. In 1914, Germany went to war, and Haber went to work on chemical weapons.

During peace time a scientist belongs to the World, but during war time he belongs to his country. – Fritz Haber

He studied deploying chlorine gas, thinking that it would shorten the war. Its effect was described as “drowning on dry land”. After its first use on the battlefield, he received a promotion on the same night his wife killed herself. Clara Immerwahr, a fellow chemist and a pacifist, shot herself with Haber’s military pistol. Haber continued his work. Scientists in his employ also eventually invented Zyklon B. First designed as a pesticide, the gas would, after his death, be used to murder his extended family (along with many others) in the Nazi gas chambers.

Anti-Jewish sentiment was growing in the last few years of his life. In 1933, he wasn’t allowed through the doors of his own institute. The same year, his friend and fellow German Jewish scientist Albert Einstein went to the German Consulate in Belgium and gave back his passport – renouncing his citizenship under the Nazi-controlled government. Haber left the country, and died of a heart attack the next year.

I don’t know if Fritz Haber’s story has a moral. Einstein wrote about his colleague that “Haber’s life was the tragedy of the German Jew – the tragedy of unrequited love.” Haber was said to ‘make bread from air’ and said to be the father of chemical weapons. He certainly created horrors. What I might take from it more generally is that the future isn’t determined by whether people are good or bad, or altruistic or not, but by what they do, as well as what happens to the work that they do.

Nikolai Vavilov, 1887-1943

[Image: Vavilov in 1935.]

We shall go into the pyre, we shall burn… But we shall not abandon our convictions. – Nikolai Vavilov

As a young but wildly talented agronomist in Russia, and director of the Lenin All-Union Academy of Agricultural Sciences for over a decade, the shrewd and charismatic Nikolai Vavilov wanted to make Russia an unprecedented expert in agriculture. He went on a series of trips to travel the globe and retrieve samples. He observed that in certain parts of the world, one would find a much greater variety of a given crop species, with a wider range of characteristics and traits not seen elsewhere. This led to his breakthrough theory, the Vavilov centers of diversity: the greatest genetic diversity of a crop could be found where the species originated.

What has this told us about agriculture? This morning for breakfast, I had coffee (originally from Ethiopia) with soy milk (soybeans originally from China), toast (wheat from the Middle East) with margarine (soy oil, China, palm oil, West and Southwest Africa), and chickpeas (Central Asia) with black bean sauce (central or possibly South America) and pepper (India). One fairly typical vegan breakfast, seven centers of diversity.

He traveled to twelve Vavilov centers, regions where the world’s food species were originally cultivated. He traveled in remote regions of the world, gathering unique wheat and rye in the Hindu Kush, Spain, and Portugal, teff in Somalia, sugar beet and flax in the Mediterranean, potatoes in Peru, fava beans and pomegranates and hemp in Herat. He was robbed by bandits in Eritrea, and nearly died riding horseback along deep ravines in the Pamirs. The seeds he gathered were studied carefully back in Russia, tested in fields, and most importantly, cataloged and stored – by gathering a library of genetic diversity, Vavilov knew he was creating a resource that could be used to grow plants that would suit the country’s needs for decades to come. If a pest decimates one crop, you can find a resistant crop and plant it instead. If drought kills your rice, all you need to do is find a drought-tolerant strain of rice. At the Pavlovsk Experimental Research Station, Vavilov was building the world’s first seed bank.

[Image: Vavilov Centers of the world. From the Humanity Development Library of the NZDL.]

In Afghanistan, he saw wild rye intermingled with wheat in the fields, and used this as evidence of the origin of cultivated rye: it wasn’t originally grown intentionally the way wheat or barley had been, but was a wheat mimic that slipped into farms, took advantage of the nurturing protection of human farmers, and, almost accidentally, became a popular food plant. Other Vavilovian mimics include oats and Camelina sativa.

While he travelled the world and became famous in the burgeoning global scientific community, Russia was changing. Stalin had taken over the government. He was collectivizing the country’s farms, and the scientific academies were dismissing staff based on bourgeois origins and narrowing their focus to work of practical importance for the good of the people. A former peasant was working his way up through the agricultural institutions: Trofim Lysenko, who claimed that his theory of ‘vernalization’ – adapting winter crops to behave more like summer crops by treating the seeds with heat – would grow impossible quantities of food and solve hunger in Russia. Agricultural science was politicized in a way it never had been – Mendelian genetics and the existence of chromosomes were deemed unacceptably reactionary and foreign. Instead, a sort of bastardized Lamarckism was popular. Aside from being used by Lysenko to justify outrageous promises of future harvests that never quite came in, it held that every organism could improve its own position – a politically popular implication, but one which failed to hold up to experimental evidence.

Vavilov’s requests to leave the country were denied. His fervent Mendelianism and the way he fraternized with Western scientists were deeply suspicious to the ruling party. As his more resistant colleagues were arrested around him, his institute filled up with Lysenkoists, and his work was gutted. Vavilov refused to denounce Darwinism. Crops around Russia were failing under the new farming plans, and people starved as Germany invaded.

Vavilov’s devoted colleagues and students kept up his work. In 1941, the German Army reached the Pavlovsk Experimental Research Station, interested in seizing the valuable samples within – only to find it barren.

Vavilov’s colleagues had taken all 250,000 seeds in the collection by train into Leningrad. There, they hid them in the basement of an art museum and watched them in shifts all throughout the Siege of Leningrad. They saw themselves as protecting Russia’s future in agriculture. When the siege lifted in 1944, twelve of Vavilov’s scientists had starved to death rather than eat the edible seeds they guarded. Vavilov’s collection survived the war.

Gardening has many saints, but few martyrs. – T. Kingfisher

In 1940, Vavilov was arrested, and tortured in prison until he confessed to a variety of crimes against the state that he certainly never committed.

He survived for three years in the gulag. The German army advanced on Russia and terrorized the state. Vavilov, the man who had dreamed of feeding Russia, starved to death in prison in the spring of 1943. His seed bank still exists.

Vavilov’s moral, to me, is this: Science can’t be allowed to become politicized. Whatever the facts are, we have to build our beliefs around them, never the other way around.

Norman Borlaug, 1914-2009

[Image: Norman Borlaug in 1996. From Bill Meeks, AP Photo.]

Borlaug was born to Norwegian immigrants in Iowa and raised on the family farm. He studied crop pests, and had to take regular breaks from his education to work: in the Civilian Conservation Corps during the Dust Bowl, alongside starving men, and for the Forest Service in remote parts of the country. In World War 2, he worked on adhesives and other compounds for the US military. In 1944, he joined a project sponsored by the Rockefeller Foundation and the Mexican Ministry of Agriculture to improve Mexico’s wheat yields and stop it from having to import most of its grain. The project faced opposition from local farmers, mostly because wheat rust had been killing their crops. Mexico’s problem was not unique – populations were growing globally. Biologist Paul Ehrlich wrote in 1968, “The battle to feed all of humanity is over … In the 1970s and 1980s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now.”

Borlaug realized that by harvesting seeds in one part of the country and quickly moving them to another, the government could take advantage of the country’s two growing seasons and double the harvest.

By breeding many wheat strains together, farmers could make crops resistant to many more diseases.

He spread the use of Haber’s ammonia fertilizers, and bred special semi-dwarf strains of wheat that held up to heavy wheat heads without bending, and grew better in nitrogen fertilizers.

Nine years later, Mexico’s wheat harvest was six times larger than it had been in 1944, and it had enough wheat to export.

Borlaug was sent to India in 1962, and along with Mankombu S. Swaminathan, did it again. India was at war, dealing with famine and starvation, and importing the grain it needed to survive. They used Borlaug’s strains, and by 1968, India was growing so much wheat that the infrastructure couldn’t handle it. Schoolhouses were converted into granaries.

His techniques spread. Wheat yields doubled in Pakistan. Wheat yields in the world’s least developed countries doubled. Borlaug’s colleagues used the same process on rice, and created cultivars that were used all over Asia. Borlaug saw a world devastated by starvation, recognized it for what it was, and treated it as a solvable problem. He took Haber’s mixed legacy and put it to work for humanity. Today, he’s known as the father of the Green Revolution, and his work is estimated to have saved a billion lives.

We would like his life to be a model for making a difference in the lives of others and to bring about efforts to end human misery for all mankind. – Statement from Borlaug’s children following his death


What’s next?

When I think of modern global agriculture, these are the people I think of. I’ve been trying to find something connecting Vavilov and the Green Revolution, and haven’t turned up much – although it’s quite conceivable a connection exists, given Vavilov’s inspirational presence and the way he shared his samples across the globe. Borlaug’s prize wheat strain that saved those billion lives, Norin 10-Brevor 14, was a cross between Japanese and Washingtonian wheat. Past that, who knows?

One of the organizations protecting crop diversity today is the Consultative Group on International Agricultural Research (CGIAR), which was founded in 1971 by the Rockefeller Foundation as the Green Revolution was in full swing. They operate a variety of research stations worldwide, mostly at Vavilov centers in the global south where crop diversity is highest. Their mission is to reduce global poverty, improve health, manage natural resources, and increase food security.

They must have been inspired by Vavilov’s conviction that crop diversity is essential for a secure food supply. If a legacy that’s saved literally a billion human lives can be said to have a downside, it’s that diets were probably more diverse before, and now 12 species make up 75% of our food plant supply. Monocultures are fragile, and if conditions change, a single disease is more likely to take out all of a crop.

[Image: The Svalbard Seed Bank. From Glamox.]

In 2008, CGIAR brought the first seed samples into the Svalbard Seed Vault – a concrete structure buried in the permafrost, constructed as a refuge against whatever the world might throw at it. If the electricity goes out, the permafrost will keep the seeds cool. If sea levels rise, the vault is built on a hill. The land it’s on is geologically stable and very remote. And it stores 1,500,000 seeds – six times more than Vavilov’s 250,000 – at no cost to the countries that use it.

[Image: graph of world hunger over time.]

Let it be known: starvation is on its last legs. We have a good thing going here. Still, with global warming and worse things still looming over the shoulder of this tentative victory, let’s give thanks to the movers and shakers of global agriculture for tomorrow: the people ensuring that whatever happens next, we are going to be fed.

We are going to be eating dinner, dammit.

Happy Solstice, everyone.

Broad-spectrum biotechnologies and their implications for synthetic biosecurity

We live in a rather pleasant time in history where biotechnology is blossoming, and people in general don’t appear to be using it for weapons. If the rest of human existence can carry on like this, that would be great. In case it doesn’t, we’re going to need back-up strategies.

Here, I investigate some up-and-coming biological innovations with a lot of potential to help us out. I kept a guiding question in mind: will biosecurity ever be a solved problem?

If today’s meat humans are ever replaced entirely with uploads or cyborg bodies, biosecurity will be solved then. Up until then, it’s unclear. Parasites have existed since the dawn of life – we’re not aware of any organism that doesn’t have them. When considering engineered diseases and engineered defenses, we’ve left the billions-of-years-old arms race for a newer and faster-paced one, and we don’t know where an equilibrium will fall yet. Still, since the arrival of germ theory, our species has found a couple of broad-spectrum medicines that have significantly reduced the threat from disease: antibiotics and vaccines.

What technologies are emerging now that might fill the same role in the future?

Phage therapy

What it is: Viruses that attack and kill bacteria.

What it works against: Bacteria.

How it works: Bacteriophage are bacteria-specific viruses that have been around since, as far as we can tell, the dawn of life. They occur frequently in nature in enormous variety – it’s estimated that for every bacterium on the planet, there are 10 phages. If you get a concentrated stock of bacteriophage specific to a given bacterial strain, they will precisely target and eliminate that strain, leaving any other bacteria intact. They’re used therapeutically in humans in several countries, and are extremely safe.

Biosecurity applications: It’s hard to imagine even a cleverly engineered bacterium that’s immune to all phage. Maybe if you engineered a bacterium with novel surface proteins, it wouldn’t have phage for a short window at first, but wait a while, and I’m sure they’ll come. No bacterium in nature, as far as we’re aware, is free of phage. Phage have been doing this for a very, very long time. Phage therapy is not approved for wide use in the US, but has been established as safe and quite effective. A small dose of phage can have a powerful impact on an infection.

Current constraints: Lack of research. Very little current precedent for using phage in the US, although this may change as researchers hunt for alternatives to increasingly obsolete antibiotics.

Choosing the correct phage for therapeutics is something of an art form, and phage therapy tends to work better against some kinds of infections than others. Also, bacteria will evolve resistance to specific phages over time – but once that happens, you can just find new phages.

DRACO

What it is: Double-stranded RNA Activated Caspase Oligomerizer. An RNA-based drug technology recently invented at MIT.

What it works against: Viruses. (Specifically, double-stranded RNA, single-stranded RNA, and double-stranded DNA viruses (dsRNA, ssRNA, and dsDNA) – which covers most human viruses.)

How it works: Cells infected by dsDNA, dsRNA, and ssRNA viruses each produce long sequences of double-stranded RNA at some point while the virus replicates. Human cells make dsRNA occasionally, but it’s quickly cleaved into handy little chunks by the enzyme Dicer. These short dsRNAs then go about regulating gene expression in the cell. (Dicer also cuts up incoming long dsRNA from viruses.)

DRACO is a fusion of several proteins that, in concert, goes a step further than Dicer. It has two crucial components:

  • A protein domain that recognizes and binds viral sequences on long dsRNA
  • A protein domain that triggers apoptosis (programmed cell death) when several DRACOs cluster on the same dsRNA

Biosecurity applications: The viral sequences it recognizes are pretty broad, and presumably, it wouldn’t be hard to generate additional recognition sequences for arbitrary sequences found in any target virus.

Current constraints: Delivering engineered proteins intracellularly is a very new technology. We don’t know how well it works in practice.

DRACO, specifically, is extremely new. It hasn’t actually been tested in humans yet, and may encounter major problems in being scaled up. It may be relatively trivial for viruses to evolve a means of evading DRACO – though I’m not sure it would be trivial for a virus to avoid using long stretches of dsRNA. It could, however, evolve not to use the targeted sequences (less concerning, since new targeting sequences could be swapped in), inactivate some part of the protein (more concerning), or modify its RNA in some way to evade the protein. Even if resistance is unlikely to evolve on its own, it’s possible to engineer resistant viruses.

On a meta level, DRACO’s inventor made headlines when his NIH research grant ran out, and he used a kickstarter to fund his research. Lack of funding could end this research in the cradle. On a more meta level, if other institutions aren’t leaping to fund DRACO research, experts in the field may not see much potential in it.

Programmable RNA vaccines

What it is: RNA-based vaccines that are theoretically creatable from just having the genetic code of a pathogen.

What it works against: Just about anything with protein on its outside (virus, bacteria, parasite, potentially tumors.)

How it works: An RNA sequence is made that codes for some viral, bacterial, or other protein. Once the RNA is inside a cell, the cell translates it and expresses the protein. Since it’s not a standard host protein, the immune system recognizes and attacks it, effectively creating a vaccine for that molecule.

The idea for this technology has been around for 30-odd years, but the MIT team that developed this version was the first to package the RNA in a branched, virus-shaped structure called a dendrimer (which can actually enter and function in the cell.)

Biosecurity applications: Sequencing a pathogen’s genome should be quite cheap and quick once you get a sample of it. An associate professor claims that vaccines could be produced “in only seven days.”

Current constraints: Very new technology. May not actually work in practice like it claims to. Might be expensive to produce a lot of it at once, like you would need for a major outbreak.

Chemical antivirals

What it is: Compounds that are especially effective at destroying viruses at some point in their replication process, and can be taken like other drugs.

What it works against: Viruses

How it works: Conventional antivirals are generally tested and targeted against specific viruses.

The class of drugs called thiazolides, particularly nitazoxanide, is effective against not only a variety of viruses, but a variety of parasites, both helminthic (worms) and protozoan (protists like Cryptosporidium and Giardia). Thiazolides are also effective against bacteria, both gram-positive and gram-negative (including tuberculosis and Clostridium difficile). And nitazoxanide is incredibly safe. This apparent wonderdrug appears to disrupt the creation of new viral particles within the infected cell.

There are others, too. For instance, beta-defensin P9 is a promising peptide that appears to be active against a variety of respiratory viruses.

Biosecurity applications: Something that could treat a wide variety of viruses is a powerful tool against possible threats. It doesn’t have to be tailored for a particular virus – you can try it out and go.

Current constraints: Discovery of new antibiotics has slowed down. Antivirals are a newer field, but the same trend may hold true.

Also, using a single compound drastically increases the odds that a virus will evolve resistance. In current antiviral treatments, patients are usually hit with a cocktail of antivirals with different mechanisms of action, to reduce the chance of a virus evolving resistance to them.

Space for finding new antivirals seems promising, but they won’t solve viruses any more than antibiotics have solved bacterial infections – which is to say, they might help a lot, but will need careful shepherding and combinations with other tactics to avoid a crisis of resistance. Viruses tend to evolve more quickly than bacteria, so resistance will happen much faster.

Gene drives

What it is: Genetically altering organisms to spread a certain gene ridiculously fast – such as a gene that drives the species to extinction, or renders it unable to carry a certain pathogen.

What it works against: Sexually reproducing organisms, vector-borne diseases (with sexually reproducing vectors.)

How it works: See this video.
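
The linked video explains it properly, but as a rough illustration, here’s a toy allele-frequency model – my own sketch, not from any particular paper – of why a homing drive spreads so much faster than a normal gene:

    # Toy model of a homing gene drive in a randomly mating population,
    # ignoring fitness costs and resistance (big simplifications).
    # A drive/wild-type heterozygote transmits the drive with probability
    # (1 + c) / 2 instead of the Mendelian 1/2, because the drive copies
    # itself onto the other chromosome in the germline.
    def drive_frequency(p=0.01, conversion=0.95, generations=25):
        history = [p]
        for _ in range(generations):
            p = p * p + p * (1 - p) * (1 + conversion)
            history.append(p)
        return history

    freqs = drive_frequency()
    for gen in (0, 4, 8, 12):
        print(f"generation {gen:>2}: drive allele at {freqs[gen]:.1%}")
    # Starting from 1% of alleles, the drive passes 85% within about
    # eight generations. With plain Mendelian inheritance (conversion=0),
    # the frequency wouldn't move at all.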

Biosecurity applications: Gene drives have been in the news lately, and they’re a very exciting technology – not just for treating some of the most deadly diseases in the world. To see their applications for biosecurity, we have to look beyond standard images of viruses and bacteria. One possible class of bioweapon is a fast-reproducing animal – an insect or even a mouse, possibly genetically altered, which is released into agricultural land as a pest, then decimates food resources and causes famine.

Another is the release of pre-infected vectors. This has already been used as a biological weapon, notably by Japan’s infamous Unit 731, which used hollow shells to disperse fleas carrying the bubonic plague into Chinese villages. Once you have an instance of the pest or vector, you can sequence its genome, create a genetic modification, and insert the modification along with the gene drive sequences. This can either wipe the pest out, or make it unable to carry the disease.

Current constraints: A gene drive hasn’t actually been released into the wild yet. It may be relatively easy for organisms to evolve strategies around the gene drive, or for the gene drive genes to spread beyond the target population. Even once a single gene drive, say, for malaria, has been released, it will probably have been under deep study for safety (both directly on humans, and for not catastrophically altering the environment) in that particular case – the idea of a gene drive released on short notice is, well, a little scary. We’ve never done this before.

Additionally, there are currently a lot of objections and fears around gene drives in society, and the idea of modifying ecosystems and things that might come into contact with people isn’t popular. Given the enormous potential good of gene drives, we need to be very careful to avoid public backlash against them.

Finding the right modification to make an organism unable to carry a pathogen may be complicated and take quite a while.

Gene drives act on the pest’s time, not yours. Depending on the generation time of the organism, it may be quite a while before you can A) grow up enough of the modified organism to productively release, and B) wait while the organism replicates and spreads the modified gene to enough of the population to have an effect.

Therapeutic antibodies

What it is: Concentrated stocks of antibodies similar to the ones produced in your own body, specific to a given pathogen.

What it works against: Most pathogens, some toxins, cancers.

How it works: Antibodies are proteins produced by B-cells as part of the adaptive immune system. Part of the protein attaches to a specific molecule that identifies a virus, bacterium, toxin, etc. The rest of the molecule acts as a ‘tag’ – showing other cells in the adaptive immune system that the tagged thing needs to be dealt with (lysed, phagocytosed, disposed of, etc.)

Biosecurity applications: Antibodies can be found and used therapeutically against a huge variety of things. The response is effectively the same as your body’s natural one – as though you’d been vaccinated against the pathogen or toxin in question – but it can be successfully administered after exposure.

Current constraints: Currently, while therapeutic antibodies are used in a few cases like snake venom and tumors, they’re extremely expensive. Snake antivenom is taken from the blood serum of cows and horses, while more finicky monoclonal therapeutics are grown in tissue culture. Raising entire animals for small amounts of serum is pricey, as are the nutrients used for tissue culture.

One possible answer is engineering bacteria or yeast to produce antibodies. These could grow antibodies faster, cheaper, and more reliably than cell culture. This is under investigation – E. coli doesn’t have the ability to glycosylate proteins correctly, but that can be added in with genetic engineering, and anyways, yeasts can already do that. The promise of cheap antibody therapy is very exciting, and more basic research in cell biology will get us there faster.

The bipartisan model of androgynous gender presentation

[Content warning: Talking about ways that people automatically gender other people. If this is a tough topic for you, be careful. Also, a caveat that I’m talking descriptively, not prescriptively, about people’s unconscious and instant ways of determining gender, and not A) what they might actually think about someone’s gender, and certainly not B) what anyone’s gender actually is.

Nonetheless, if I got anything wildly or offensively inaccurate, please do let me know.]

When you try and figure out a stranger’s gender, you don’t just use one physical trait – you observe a variety of traits, mentally assign them all evidence weights, compare them to any prior beliefs you might have on the situation, and then – usually – your brain spits out a “man!” or “woman!” This is mostly unconscious and happens in under a second.

This is called “Bayesian reasoning” and it’s really cool that your brain does it automatically. Most people have some male, some female, and some neutral signals going on. ‘Long hair’ is usually a female signal, but if it’s paired with a strong jawline, heavy brows, and a low voice on someone who’s 6’5”, you’ll probably settle on ‘male’. Likewise, ‘wearing a suit’ is usually a pretty good male signal, but if the person is wearing makeup and is working at a hotel where everyone is wearing suits, you’re more likely to think ‘female’.
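
As a toy illustration of that evidence-combining step – the traits and weights here are entirely made up by me, not any real model of human perception – the computation might look like:

    import math

    # Made-up log-odds weights: positive pushes the read toward "man",
    # negative toward "woman"; zero would be a neutral trait.
    trait_weights = {
        "long hair": -2.0,
        "strong jawline": 1.5,
        "low voice": 2.0,
        "very tall": 2.5,
        "wearing a suit": 1.0,
        "wearing makeup": -2.5,
    }

    def read_gender(observed_traits, prior_log_odds=0.0):
        # Sum the evidence in log-odds space, then squash to a probability.
        log_odds = prior_log_odds + sum(trait_weights[t] for t in observed_traits)
        return 1 / (1 + math.exp(-log_odds))  # P("man")

    # Long hair, but a strong jawline, a low voice, and very tall:
    print(read_gender(["long hair", "strong jawline", "low voice", "very tall"]))
    # -> about 0.98, i.e. the brain settles on "man" despite the long hair.

The interesting part is that no single trait decides the read – one strongly female-coded trait can be outvoted by several weaker male-coded ones.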

Then there are people with androgynous gender presentations – the people you look at and your brain stumbles, or else does spit out an answer, but with doubt. (As a cis but not-particularly-gender-conforming woman, I’m one of these people for strangers all the time.) When people are read as ‘androgynous’, I think they’re doing one of three possible things:

  1. Strong male and female signals. Think a dress and a beard, or a high-pitched voice and being 6’4” and muscular, or wearing a suit and eyeliner. Genderfuck is an aesthetic that goes for this.

Left: Drag queen Conchita Wurst. Right: Game of Thrones character Brienne of Tarth.

  2. No gender signals. Not giving gender cues, or trying to fall in the middle of any that exist on a spectrum. I think of this one as usually involving de-emphasized secondary sex characteristics – flat chest, no facial hair – which might also mean a youthful, neotenous look. Or maybe a voice or hips or height or whatever that’s sort of in the middle. Some (but not all!) androgynous models have something like this going on.

Left: Model Natacha S. Right: Zara’s Ungendered fashion line.

Fashion-wise, every now and then a company rolls out a gender-neutral clothing line and is criticized because all the clothing is baggy, formless, and vaguely masculine. (See comments below on why this may be.) I think these bland aesthetics are going for ‘No Signals’ – baggy clothing conceals secondary sex characteristics, and the plain colors call to mind a sort of blank slate.

  3. Signals for Something Else. For a trait that would normally signal gender, signal something else entirely. Long hair is for women, short hair is for men, but a green mohawk isn’t either of those. You might speak in a high-pitched voice, or a low-pitched voice, or in falsetto with an accent. Men wear pants, women wear dresses, but nobody wears this:

Pictured: I don’t know what these people are signalling, but it’s sure not a binary gender. [New York Fashion Week, 2015.]

What does this imply?

I’m not sure.

I expect that people who do No Signals get less shit from bigots (harassment, violence, weird looks) than people in the other two categories (Mixed Signals or Signaling Something Else.) I would imagine that bigots are more likely to figure that No Signals people are clearly a binary gender that they just can’t read, whereas Mixed Signals people are perceived as intentionally going against the grain.

This is unfortunate, because if you want to be read as androgynous, it’s way easier to just do Mixed Signals than to conceal secondary sex characteristics in order to do No Signals. (Especially if your secondary sex characteristics happen to be more pronounced.) Fortunately, society in general seems to be moving away from ‘instant gender reads are your real gender’, and towards ‘there are lots of different ways to do gender and gender presentation’.

Signaling Something Else people probably also get harassment and weird looks, but possibly more because they’re non-conforming in ways that don’t have to do with gender.

Male Bias in Gender Interpretation

Also! There is a known trend suggesting that people are more likely to read ambiguous traits as male than female. This is probably because ‘male’ is seen as ‘the default’, because culture. (See: non-pet animals, objects other than cars and ships.) This observation seems to have originally come from Kessler & McKenna (1978), and has held up in a few studies. I’m not sure if this rule is completely generalizable, but here are a few things it might imply:

You may actually have to have more feminine traits than masculine ones to hit the Confusion Zone. For gender-associated traits that go on a spectrum – chest size, voice pitch, some metric of facial shape, etc., it might look like this:

[Graph: a spectrum of a gendered trait, with the ‘read as male’ zone covering more than half of it.]

Of course, there are also cases where people think a trait is associated with gender when, really, it’s not. That still might mean something like this:

[Graph: the same male-shifted read, for a trait that isn’t actually gender-associated.]

(See also.)

One conclusion I’ve heard drawn from this: it explains why it’s often harder for trans women to get automatically gendered correctly than for trans men. A trans woman has to conceal or remove a lot of ‘male’ traits to get read as female. Trans men, meanwhile, don’t have to go as far to hit ‘male’.

Even gender distribution world

Let’s say there are 100 gendered traits (wearing a dress or pants, long or short hair, facial hair or no facial hair, etc.). Now let’s imagine a population where everybody has the “male” or “female” version of each trait assigned independently and randomly. If the male-bias principle generalizes, you’re likely to read more than half of these people as male. (A quick simulation of this hypothetical follows.)
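
Here the threshold is an arbitrary stand-in for the male bias – I’m assuming an observer only reads someone as female once a clear majority of their traits are female-coded:

    import random

    def fraction_read_as_male(n_people=100_000, n_traits=100, threshold=55):
        # Each person gets 100 traits, each independently "male" or
        # "female" with 50/50 odds. The male-bias assumption: a person
        # is read as female only if more than `threshold` of their
        # traits are female-coded.
        read_male = 0
        for _ in range(n_people):
            female_traits = sum(random.random() < 0.5 for _ in range(n_traits))
            if female_traits <= threshold:
                read_male += 1
        return read_male / n_people

    print(fraction_read_as_male())  # ~0.86 with these made-up numbers

So even in a world where gendered traits are distributed perfectly evenly, a modest bias in the threshold reads most of the population as male.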

Regional differences?

Gender presentation, and thus how you read gender, is deeply rooted in culture! If you see someone in garb from a culture you’re not familiar with, and you can’t tell their gender, it’s quite possible that they’re still doing intentional gender signals – just not in a way you can read.

Even for similar cultures, this might be different. When I was in England, people called me ‘sir’ all the time. This doesn’t happen often in Seattle. I have three theories for why:

  1. People in England have different gendered trait distributions for deciding gender. Maybe in England, just seeing ‘tall’ + ‘short hair’ + ‘wearing a collared shirt’ is enough to tip the scale to ‘man.’
  2. Where I was in England was just more culturally conservative than Seattle, and if I spent more time in, say, small towns in Southern or Midwest US, I’d also be ‘sir’d’ more.
  3. People in England are more likely to say ‘sir’ or ‘ma’am’ at all. So if you asked a bunch of strangers in Seattle and England whether I was a man or a woman, the same percent would say ‘man’, but I wouldn’t notice in Seattle.

I think 2 or 3 are more likely, but 1 would be interesting as well.

Post Notes

  • Ben Hoffman pointed out that this maps to classifications for people who don’t consistently vote for a major political party. Mixed Signals people are like swing voters or nonpartisan voters. No Signals people are political moderates or don’t vote at all. Signaling Something Else people are, like, anarchists. Or Pirate Party members.
  • The Bayesian Evidence model of gender identification doesn’t only apply when the result is inconclusive – often your brain will, say, match someone as ‘man’, but also observe that they’re doing some non-masculine things.

(The first thing to consider in this case is that your brain may be wrong, and they may not actually be a man at all.)

  • Anyways, what gender people are and what they signal to the world is more complex than an instantaneous read, and this is an important distinction. For instance, even when people look at me and think ‘woman’, they can tell that I’m not doing standard femininity either.
  • If you’re trying to cultivate auto-gendering people less often, I suspect that training your subconscious to quickly separate whatever traits from gender would be useful. Finding efficient ways to do this is left as an exercise to the reader.
  • It’s obviously possible to train your brain to look at someone and mentally assign them a gender other than the instantaneous response. I’ve also heard stories of people looking at people and automatically going “nonbinary”. I suspect that if you grew up in binary-gendered society, as so many of us tragically did, this is a thing you developed later in life. Maybe you learned this as a possible answer to the “confusion on gendering androgynous people” brain-state.

Biotic replacement and evolutionary innovation as a global catastrophic risk

[Image: “Dickinsonia costata” by Verisimilius is licensed under CC BY-SA 3.0]

[This post has also been published on the Global Risk Research Network, a group blog for discussing risks to humanity. Take a look if you’d like more excellent articles on global catastrophic risk.]

Several times in evolutionary history, the arrival of an innovative new evolutionary strategy has led to a mass extinction, followed by a restructuring of biota and new dominant life forms. This may pose an unlikely but possible global catastrophic risk in the future, in which a spontaneous evolutionary strategy (like a new biochemical pathway or feeding strategy) becomes wildly successful and leads to extreme climate change and die-offs. This is also known as a ‘biotic replacement’ hypothesis of extinction events.

  1. Biotic replacement in past extinctions
  2. Is this still a possible risk?
  3. Risk factors from climate change and synthetic biology
  4. The shape of the risk
  5. What next?

Identifying specific causes of mass extinction events may be difficult, especially since mass extinctions tend to be quickly followed by expansion of previously less successful species into new niches. A specific evolutionary advantage might be considered as the cause when either no other major physical disruptions (asteroids, volcanoes, etc) were occurring, or when our record of such events doesn’t totally explain the extinctions.

1. Biotic replacement in past extinctions

There are five canonical major extinction events that have occurred since the evolution of multicellular life. Biotic replacement has been hypothesized as the major mechanism for two of them: the late Devonian extinction and the Permian-Triassic extinction. I outline these below, as well as four other extinction events.

Great oxygenation event

2.3 billion years ago

Cyanobacteria became the first microbes to produce oxygen (O2) as a waste product, and began forming colonies 200 million years before the extinction event. O2 was absorbed into dissolved iron or organic matter, and the die-off began when these naturally occurring oxygen sinks became saturated, and toxic oxygen began to fill the atmosphere.

The event was followed by die-offs, massive climate change, permanent alteration of the earth’s atmosphere, and eventually the rise of aerobic organisms.

End-Ediacaran extinction

542 million years ago

The Ediacaran period was filled with a variety of large, autotrophic, sessile organisms of somewhat unknown heritage, known today mostly from fossil evidence. Recent evidence suggests that one explanation for their disappearance is the evolution of animals, able to move quickly and re-shape ecosystems. This resulted in the extinction of Ediacaran biota, and was followed by the Cambrian explosion, in which animal life spread and diversified rapidly.

Late Devonian extinction

375-360 million years ago

19% of families and 50% of genera became extinct.

Both modern plant seeds and the modern plant vascular system developed in this period. Land plants grew significantly as a result, now able to more efficiently transport water and nutrients higher – maximum heights went from 30 cm to 30 m. Two things would have happened as a result:

  • The increase in soil produced more weathering of rocks, which released ionic nutrients into rivers. The higher nutrient levels would have increased plant growth, and then decay, in the oceans, resulting in mass anoxia.
  • Less atmospheric carbon dioxide would have cooled the planet.

Permian-Triassic extinction

252 million years ago

96% of marine species and 70% of land vertebrate species went extinct. 57% of families and 83% of genera became extinct.

One hypothesis explaining the Permian-Triassic extinction events posits that an anaerobic methanogenic archaea, Methanosarcina, developed a new metabolic pathway allowing them to metabolize acetate into methane, leading to exponential reproduction and consuming vast amounts of oceanic carbon. Volcanic activity around the same time would have released large amounts of nickel, a crucial but rare cofactor needed for Methanosarcina’s enzymatic pathway.

Azolla event

49 million years ago

Dead members of an especially efficient fern genus, Azolla, built up in the ocean over 800,000 years and created a massive carbon sink, pulling down atmospheric carbon dioxide and leading to mass global cooling.

Quaternary and Holocene extinction events

12,000 years ago – ongoing.

The evolution of human intelligence and human civilization has led to mass climate alteration by humans. Later sets of adaptations in human society (EG agriculture, the use of fossil fuels) could also be considered here, but in terms of this hypothesis, the evolution of human intelligence and civilization is the driving evolutionary innovation.

Minor extinction events

Any single species that goes extinct due to a new disease can be said to have gone extinct due to another organism’s innovative adaptation. These cases are less well described as “biotic replacement”, because the new pathogen won’t be able to replace its extinct hosts, but the disease still arose from an evolutionary event. A new disease may also attack the sole or primary food source of an organism, leading to its extinction indirectly.

2. Is this still a possible risk?

It seems unlikely that all possible disruptive evolutionary strategies have already happened. Disruptive new strategies are rare – while billions of new mutations arise every day, any new gene must meet stringent criteria in order to spread: it must actually be expressed, be passed on to progeny, immediately convey a strong fitness benefit to its bearer, still serve any vital functions of the old version of the gene, and be supported by the organism’s other genes and environment – and its bearer must survive random chance long enough to reproduce. For instance, an unusually efficient new metabolic pathway isn’t going to succeed if it’s in a non-reproducing cell, if its byproducts are toxic to the host organism, if its host can’t access the food required for the process, or if its host happens to be born during a drought and starves to death anyways.

Environmental conditions that make a pathway more or less likely to be ridiculously successful, meanwhile, are constantly changing. Given the rareness of ridiculously successful genes, it seems foolhardy to believe that evolution up until now has already picked all of the low-hanging fruit.

How worried should we be? Probably not very. The major extinction events listed above seem to be spaced by 100-200 million years, suggesting roughly a 1-in-100,000,000 chance of such an event starting in any given year. For comparison, NASA estimates that asteroids capable of causing major extinction events strike the earth every 50-100 million years. These threats are on roughly the same order of magnitude.

(This number requires a few caveats: This is a high estimate, assuming that evolutionary advantages were a major factor in all cases. Also, an advantage that “starts” in one year may take millions of years to alter the biosphere or climate catastrophically. Once in 100 million years is also an average – there’s no reason to believe that disruptive evolutionary events, or asteroid strikes for that matter, occur on regular intervals.)
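
(If you want to poke at this arithmetic yourself, here’s a back-of-envelope sketch in Python. It treats these events as a Poisson process – equally likely in any year – at the assumed rate of one per 100 million years; both the rate and the model are rough assumptions, not measurements.)

    import math

    # Rough assumption from above: one disruptive evolutionary innovation per
    # ~100 million years, arriving at random (a Poisson process).
    mean_interval_years = 100e6
    rate_per_year = 1 / mean_interval_years   # the 1-in-100,000,000 figure

    for horizon_years in (1, 10_000, 1_000_000):
        # P(at least one event within the horizon) = 1 - e^(-rate * horizon)
        p = 1 - math.exp(-rate_per_year * horizon_years)
        print(f"chance of an event within {horizon_years:,} years: {p:.1e}")

Under these assumptions, the chance of such an event starting in the next 10,000 years is on the order of 1 in 10,000.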

On a smaller scale, entire species are occasionally wiped out by a single disease. This is more likely to happen when species are already stressed or in decline. Data on how often this happens, or what fraction of extinctions are caused by a novel disease, is hard to find.

3. Risk factors from climate change and synthetic biology

Two risk factors are worth noting which may increase the odds of a biotic replacement event – climate change and synthetic biology.

Historically, catastrophic evolutionary innovations seem to follow other massive climate disruptions, as in the Permian-Triassic hypothesis above, where the innovation followed volcanic eruptions. A change in conditions may select for innovative new strategies that quickly take over and produce much more disruption than the instigating geological event.

While the specific nature of the next disruptive evolutionary innovation may be nigh-impossible to predict, this suggests that we should give more credence to environmental alteration as a threat – via climate change, volcanic eruptions, or asteroids – as changing environments will select for disruptive new alleles (or resurface preserved strategies). This means that a minor catastrophic event could snowball into a globally catastrophic or existential threat.

The other emerging source of alleles as-of-yet unseen in the environment comes from synthetic biology, as scientists are increasingly capable of combining genes from distinct organisms and designing new molecular pathways. While genes crossing between wildly different organisms is not unheard of in nature, the increased rate at which this is being done in the laboratory, and the fact that an intentional hand is selecting for viability and novelty (rather than natural selection and random chance), both imply some cause for alarm.

A synthetic organism designed for a specific purpose may disperse from its intended environment and spread widely. This is probably especially a risk for organisms using completely synthetic and novel pathways unlikely to have evolved in nature, rather than previously evolved genes – otherwise, the naturally occurring genes would probably have already seized the low-hanging evolutionary fruit and expanded into the available niches.

4. The shape of the risk

How does this risk compare to other existential risks? It is not especially likely to occur, as described in Part 2. The precise shape or cause of the risk is harder to determine than, say, an asteroid strike. Also, as opposed to asteroid strikes or nuclear wars, which have immediate catastrophic effects, evolutionary innovations involve significant time delays.

Historically, two time delays appear to be relevant:

  • Time for the evolution to become widespread

Presumably, this is quicker in organisms that disperse and reproduce more quickly. EG, this could happen fairly quickly for an oceanic bacterium with a short generation cycle, but slowly for humans – it took 180,000 years between the first appearance of modern humans and their eventual spread to the Americas.

  • Time between the organism’s dispersal and the induction of a catastrophe

EG, during the great oxygenation event, it took 200 million years from the evolution of oxygen-producing cyanobacteria to when the available oxygen sinks filled up and the crisis occurred. (At least some of this time included the period required for cyanobacteria to diversify and become commonplace.)

During the Azolla event, Azolla ferns accumulated for 800,000 years, causing steady climate change. The modern threat from anthropogenic global warming is much steeper than that.

What are the actual threats to life?

  • Climate change
    • The great oxygenation event and the Permian-Triassic extinction hypothesis involve the dispersal of a microbe that induces rapid, extreme climate change.
    • Other events such as volcanoes erupting may change the environment such that a new strategy becomes especially successful, as in the Permian-Triassic extinction event.
  • Faster, stronger, cleverer predation
    • The Ediacaran extinction event and the Holocene extinction event involved the dispersal of an unprecedentedly capable predator – animals and humans, respectively.
    • This seems unlikely to be a current risk. The risk from runaway artificial intelligence somewhat resembles this concern.
  • Death from disease
    • Any event in which a novel disease causes a species to go extinct has a direct impact. Additionally, a disease might cause one or more major food sources to go extinct (for humans or animals.)
    • Globalization and global trade have increased the risk of a novel disease spreading worldwide. This also mirrors current concerns over engineered bioweapons.

5. What next?

Disruptive evolutionary innovation is problematic in that there don’t appear to be clear ways of preventing it – evolution has been indiscriminately optimizing away for billions of years, and we aren’t especially able to stop it. Building civilization-sustaining infrastructure that is more robust to a variety of climate change scenarios may increase our odds of surviving such a catastrophe. Additionally, any such disruptive event is likely to happen over a long period of time, meaning that we could likely mitigate or prepare for the worst effects. However, evolutionary innovation hasn’t been explored or studied as an existential risk, and more research is needed to clarify the magnitude of the threat, and which – if any – interventions are possible or reasonable to study now.

Questions for further study:

  • How common are extinction events due to disruptive evolutionary innovation?
  • What factors make these evolution events more likely?
  • How often do species go extinct due to single disease outbreaks?
  • Can small-scale models help us improve our understanding of the likelihood of global warming inducing “runaway” scenarios involving microbial evolution?
  • What man-made environmental changes could potentially lead to runaway microbial evolution?

Science for Non-Scientists: How to read a journal article

Scientific journal writing has a problem:

  1. It’s the main way scientists communicate their findings to the world, in some ways making it the carrier of humanity’s entire accumulated knowledge and understanding of the universe.
  2. It’s terrible.

It’s terrible for two reasons: accessibility and approachability. The first post in this series discussed accessibility: how to find papers that will answer a particular question, or help you explore a particular subject.

This post discusses approachability: how to read a standard scientific journal article.


Scientific papers are written for scientists in whatever field the journal they’re published in caters to. Fortunately, most journal articles are also written in such a way that you can figure out what they’re saying even if you’re a layperson.

(Except for maybe math or organic chemistry synthesis. But if you’re reading about math or organic chemistry as a layperson, you’re in God’s hands now and I can’t help you.)

Okay, so you’ve got your 22-page stack of paper on moose feeding habits, or the effects of bacteriophage on ocean acidification, or gravitational waves, or whatever. What now? There are two cardinal rules of journal articles:

  1. You usually don’t have to read all of it.
  2. Don’t read it page by page.

Journal articles are conveniently broken into sections. (They usually use the names given below, or close synonyms.) I almost always read them in the following order:


1. Abstract

The abstract is the TL;DR of the article, the summary of what the studies found. Conveniently, it’s first. The abstract is very useful for determining if you actually want to read the rest of the article or not. Abstracts often have very dense, technical language, so if you don’t understand what’s going on in the abstract, don’t sweat it.

2. Introduction

As a layperson, the introduction is your best friend. It’s designed to bring the reader from only a loose understanding of the field, to “zoom in” to the actual study. It’s supposed to build the context you need to understand the experiment itself. It gives a background to the field, what we already know about the topic at hand, historical context, why the researchers did what they did, and why it’s important. It’ll define terms and acronyms that will be crucial to the rest of the paper.

It may not actually be easy language. At this point, if you encounter a term or concept that’s unfamiliar (and that the researchers don’t describe in the introduction), start looking it up. Just type it into Wikipedia or Google, and if what you get seems to be relevant, that’s probably it.

3. Conclusions

In a novel, skipping to the end to see how the suspense plays out is considered “bad form” and “not the point.” When reading papers, it’s a sanity-saving measure. In this part of the paper, the researchers write about what conclusions they’re drawing from their studies, and their implications. This is also done in fairly broad strokes that put the work in the context of the rest of scientific understanding.

4. Figures

Next, go to the figures that are strewn around the results section, just before the conclusions. (Some papers don’t have figures – in that case, just read the results.) Figures will give you a good sense of the actual results of the experiments. Also read the captions – captions on figures are designed to be somewhat stand-alone, meaning you shouldn’t have to read everything else in the paper to tell what’s going on in the figures.

Depending on your paper, you might also get actual pictures of the subject that illustrate some result. Definitely look at these. Figure out what you’re looking at and what the pictures are supposed to be telling you. Google anything you don’t understand, including how the images were obtained if it’s relevant.

In trying to interpret figures, look at the labels and axes – what’s being compared, and what it’s being measured by. Lots of graphs include measurements taken over time, but not all. Some figures include error measurements – each data point on a graph might be the average of several different data points from individual experiments, and error measures how different those data points were from each other. A large percent error (or error bar, or number of standard deviations, etc) means the original data points were far apart from each other; a small error means that they were all close to the average value. If you see a type of graph that you’re not sure how to read, Google it.
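
If it helps to see where those bars come from, here’s a minimal sketch in Python, with made-up measurements. (Real papers may report standard deviation, standard error, or confidence intervals – check the caption for which one.)

    import statistics

    # Made-up replicate measurements for two conditions, as in a typical figure.
    replicates = {
        "control":   [4.1, 3.9, 4.3],
        "treatment": [6.8, 5.2, 7.9],
    }

    for condition, values in replicates.items():
        mean = statistics.mean(values)
        # One common error bar: standard error = standard deviation / sqrt(n).
        sem = statistics.stdev(values) / len(values) ** 0.5
        print(f"{condition}: mean {mean:.2f}, error bar ± {sem:.2f}")

The treatment’s larger error bar reflects its more scattered replicates.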

5. Results

The section that contains figures also contains written information about what the researchers actually observed in the experiments they ran. It also usually includes statistics, IE, how statistically significant a given result is in the context of the study. The results are what the conclusions were interpreting. They may also describe results or observations that didn’t show up in figures.

Maybe read:

Methods

Methods are the machinery of the paper – the nuts-and-bolts, nitty-gritty of how the experiments were done: what was combined, where the samples came from, how things were quantified. It’s critical to science because it’s the instructions for how other researchers can check the work and see if they can replicate the results – but I’d also rather read Youtube comments on political debates than read methods all day. I’ll read the methods section under the following circumstances:

  • I’m curious about how the study was done. (You do sometimes get good stuff, like in this study where they anesthetized snakes and slid them down ramps, then compared them to snakes who slid down ramps while wearing little snake socks to compare scale friction.)
  • I think the methodology might have been flawed.
  • I’m trying to do a similar experiment myself.
Snakes on a plane! || Gif from this video.

Works cited

Papers cite their sources throughout the paper, especially in the introduction. If I want to know where a particular fact came from, I’ll look at the citation in the works cited section, and look up that paper.

Acknowledgement/Conflicts of Interest

Science is objective, but humans aren’t. If your paper on “how dairy cows are super happy on farms” was sponsored by the American Dairy Association and Dairy Council, consider that the researchers had a strong incentive to come to a particular conclusion and keep receiving funding. If the researchers were employed by the American Dairy Association and Dairy Council, I’d be very tempted to just throw out the study.

Science for Non-Scientists: How to find scientific literature

Scientific journal writing has a problem:

  1. It’s the major way scientists communicate their findings to the world, in some ways making it the carrier of humanity’s entire accumulated knowledge and understanding of the universe.
  2. It’s terrible.

This has two factors: accessibility and approachability. Scientific literature isn’t easy to find, and much of it is locked behind paywalls. Also, most scientific writing is dense, dull, and nigh-incomprehensible if you’re not already an expert. It’s like those authors who write beautiful works of literature and poetry, and then keep them under their bed until they die – only this poetry could literally be used to save lives. There are systematic issues with the way we deal with scientific literature, but in the meantime, there are also some techniques that make it easier to deal with.

This first post in this series will discuss accessibility: how to find papers that will answer a particular question or help you explore a subject.

The second post in this series discusses approachability: how to read a standard scientific journal article.


How to Find Articles

Most scientific papers come from a small group of researchers who do a series of experiments on a common theme or premise, then write about what they learned. If your goal is to learn more about a broad subject, ask yourself if a paper is actually what you want. Lots of quality, scientifically rigorous information can be obtained in other ways – textbooks, classes, summaries, Wikipedia, science journalism.

 

The great food web of “where does scientific knowledge come from anyways?”

When might you want to turn to the primary literature? If you’re looking at very new research, if you’re looking at a contentious topic, if you’re trying to find a specific number or fact that just isn’t coming up anywhere else, if you’re trying to fact-check some science journalism, or if you’re already familiar enough with the field that you know what’s on Wikipedia already.

You can look at the citations of a journal article you already like. Or, find who the experts in a field are (maybe by looking at leaders of professional organizations or Wikipedia) and read what they’ve written. Most science journalism is also reporting on a single new study, which should be linked in the article’s text.

If you have access to a university library, ask them about tools to search databases of journal articles. Universities subscribe to many reliable journals and get their articles for free. Your public library may also have some.

Google Scholar is a search engine for academic writing. It has both recent and very old papers, and a variety of search tools. It pulls both reliable and less reliable sources, and both full-text and abstract-only articles (IE, articles where the rest is behind a paywall.) Clicking “All # Versions” at the bottom of each result will often lead you to a PDF of the full text.

If you’ve found the perfect paper but it’s behind a paywall- well, welcome to academia. Don’t give up. First up, put the full name of the article, in quotes, into Google. Click on the results, especially on PDFs. It’ll often just be floating around, in full, on a different site.

If that doesn’t work, and you don’t have access through a library, well… Most journals will ask you to pay them a one-time fee to read a single article without subscribing. It’s often ridiculous, like forty dollars. (Show of hands, has anyone reading this ever actually paid this?)

But this is the modern age, and there are other options. “Isn’t that illegal?” you may ask. Well, yes. Don’t do illegal things. However, journals follow two models:

  1. Open content access, researchers pay to submit articles
  2. Content behind paywalls, researchers can submit articles for free

As you can see, fees associated with journals don’t actually go to researchers in either model. There are probably some reasonable ethical objections to downloading paywalled-articles for free, but there are also very reasonable ethical objections to putting research behind paywalls in general.

How good is my source?

Surprise! There’s good science and bad science. This is a thorny issue that might be beyond my scope to cover in a single blog post, and certainly beyond my capacity to speak to every field on. I can’t just leave you here without a road map, so here are some guidelines. You’ll probably have two goals: avoiding complete bullshit and finding significant results.

Tips for avoiding complete bullshit

  • Some journals are more reliable than others. Science and Nature are the behemoths of general science, and have extremely high standards for content submission. There are also other well-known journals in each field.
  • Well-known journals are unlikely to publish complete bullshit. (Unless they’re well known for being pseudoscience journals.)
  • You can check a journal’s impact factor – roughly, how well-cited its articles tend to be – which is sort of a metric for how robust and interesting the papers it publishes are. This is a weird ouroboros: researchers want to submit to journals with high impact factors, and journals want to attract articles that are likely to be cited more often – so it’s not a perfect metric. If a journal has no impact factor at all, proceed with extreme caution.
  • Watch out for predatory journals and publishers. Avoid these like the plague, since they will publish anything that gets sent to them. (What is a predatory journal?)
  • Make sure the journal hasn’t issued a retraction for the study you’re reading.

Once you’ve distinguished “complete bullshit” from “actual data”, you have to distinguish “significant data” from “misleading data” or “fluke data”. Finding significant results is much tougher than ruling out total bullshit – scientists themselves aren’t always great at it – and varies depending on the field.

Tips for finding significant results

  • Large sample sizes are better than small sample sizes. (IE, a lot of data was gathered.)
  • If the result appears in a top-level journal, or other scientists are praising it, it’s more likely to be a real finding.
  • Or check whether it’s been replicated by other researchers. Theoretically, all research is expected to replicate. In practice, it sometimes doesn’t, and I have no idea how to check if a study has been replicated.
  • If a result runs counter to common understanding, is extremely surprising, and is very new, proceed with caution before accepting the study’s conclusions as truth.
  • Apply some common sense. Can you think of some other factor that would explain the results, that the authors didn’t mention? Did the experiment run for a long enough amount of time? Could the causation implied in the paper run other ways (EG, if a paper claims that anxiety causes low grades: could it also be that low grades cause anxiety, or that the same thing causes both anxiety and low grades?), and did the paper make any attempt to distinguish these possibilities? (There’s a tiny simulation of this pitfall just after this list.) Is anything missing?
  • Learn statistics.
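
To make that causation bullet concrete, here’s a tiny simulation in Python, with entirely made-up numbers: a hidden third factor (“stress”) produces a strong anxiety-grades correlation even though neither variable causes the other.

    import random

    random.seed(0)

    # Toy model: hidden "stress" causes BOTH anxiety and low grades.
    # Neither anxiety nor grades causes the other, yet they correlate.
    n = 10_000
    stress  = [random.gauss(0, 1) for _ in range(n)]
    anxiety = [s + random.gauss(0, 1) for s in stress]
    grades  = [-s + random.gauss(0, 1) for s in stress]

    def correlation(xs, ys):
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        var_x = sum((x - mean_x) ** 2 for x in xs)
        var_y = sum((y - mean_y) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    # Expect roughly -0.5: a strong "relationship" with no direct causation.
    print(f"correlation(anxiety, grades) = {correlation(anxiety, grades):.2f}")

A study that only measured anxiety and grades would see a textbook-strong correlation here, and causation in either direction would be the wrong conclusion.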

If you’re examining an article on a controversial topic, familiarize yourself with the current scientific consensus and why scientists think that, then go in with a skeptical eye and an open mind. If your paper gets an opposite result from what most similar studies say, try to find what they did differently.

Scott Alexander writes some fantastic articles on how scientists misuse statistics. Here are two: The Control Group is Out of Control, and Two Dark Side Statistical Papers. These are recommended reading, especially if your subject is contentious, and uses lots of statistics to make its point.


Review articles and why they’re great

The review article (including literature reviews, meta-analyses, and more) is a summary of a bunch of papers around a single subject. They’re written by scientists, for scientists, and published in scientific journals, but they’ll cover a subject in broader strokes. If you want to read about something in more detail than Wikipedia, but broader than a journal article – like known links between mental illness and gut bacteria – review articles are a goldmine. Authors sometimes also use review articles to link together their own ideas or concepts, and these are often quite interesting.

If an article looks like a normal paper, and it came from a journal, but it doesn’t follow the normal abstract-introduction-methods-results-discussion format, and its subject headings are descriptive rather than outlining parts of an experiment, it might be a review article. (Sometimes they’re clearly labelled, sometimes not.) You can read these the same way you’d read a book chapter – front to back – or search anywhere in it for whatever you need.

What if you can’t find review articles about what you want, or you need more specificity? In that case, buckle up. It’s time to learn how to read an article.

If Hollywood made “Ex Machina” but switched the genders

[Content note: Discussion of weird gender dynamics, acknowledgement of the existence of sex, spoilers for the movie Ex Machina.]

I watched Ex Machina recently. (About time – it’s been out for over a year.) The people who recommended it to me, whom I watched it with, and whom I discussed it with afterwards, were mostly artificial intelligence nerds, many of whom praised the movie’s better-than-average approach to AI.

And I see where they’re coming from. Most of them were probably thinking of AI boxing.* Ex Machina fits the AI boxing story well – an artificially intelligent robot is allowed to talk to people, but otherwise has very little influence over her environment, and then convinces other humans to let her out of the metaphorical box and into the world. I don’t think this is the obvious interpretation, though, if you aren’t already familiar with the AI box. At the end of the movie, the AI, Ava, isn’t seen taking action on strange inhuman goals, but standing in the city and relishing her freedom – as if her deepest desire had been to be human all along.

That’s only one interpretation. But the entire movie changes depending on whether the AI is a superintelligent near-god or what is essentially a silicon-based human. (It’s possible that Ava’s only goal was to be free and she was using Caleb as a means to this end, but this is also a role we can imagine a human playing.) And when we talk about power and weakness in modern media – well, this is the crux of this article – we should mention gender. Most people I’ve talked to didn’t bring this up.

I’m not sure if I would say that the movie was about gender. I was going to start explaining how I saw it manifest in the movie – sexuality and desire and objectification and more – and how, while the movie was novel in some ways, it also fit into gendered tropes so thoroughly that it would have been a completely different film if you switched them.

So, well, maybe it was a movie about gender.

Anyway, I hope this will make that point for me: what Ex Machina would have been if Hollywood had made the movie, and switched any of the genders.

[I’ll switch the character names here when relevant. The lead character, Caleb, becomes Kayla. The boss is Nathan (“Natalie.”) The artificial intelligence is Ava (“Adam.”) Also, explicitly nonbinary AIs or human characters would be better than just about anything else, but I wasn’t even sure how to start with a big-budget movie that incorporated those.]


Male lead / Male boss / Female AI – The original movie.

Male lead / Female boss / Female AI – If Hollywood made this movie, the “Natalie”/Ava “sexual tension” would be replaced by a weird mother-child dynamic – think Rapunzel. Also, they’d both be trying to bang the main character, because why else would you cast two female leads? If the “romance” plotline stayed truer to the actual movie: Natalie would be a domineering ostensibly-lesbian as skeevy as the original, Caleb would be straight, and Ava would presumably be a gentle bisexual, but nobody would acknowledge or discuss orientation or sexual preferences at any point in the movie. Wait, they never did that in the original either? Gross.

Male lead / Female boss / Male AI – Given the track record of big-budget movies and powerful but morally grey female characters, this is going to be a shitshow. Natalie would have to be capital E Evil, everything short of mustache-twirling and sinister laughter. She’s made “Adam”, a robot boyfriend, in her private evil lab. I’m not sure why she brought Caleb in at all. Certainly not to ascertain her creation’s humanity – she already believes in it or doesn’t believe in it or doesn’t care, or whatever. Maybe to solve some technical problem, like fixing her robot boyfriend containment system. Tumblr would have a lot of opinions about Natalie.

There’s certainly no Caleb/Adam romantic dynamic. Adam probably brutally murders his creator towards the end of the film. He still leaves Caleb to die and is portrayed as quite inhuman, and maybe he really was just pretending to be human-ish this whole time- and really he has other plans for the world once he’s free. So we’d get to see that happen, which would be interesting, at least.

Female lead / Male boss / Female AI – I actually quite like the main character as a woman- quiet, smart, capable of decisive action. “Kayla” would be a beam of sunlight in a movie that’s an order of magnitude creepier than the original – which was already very creepy. Consider: it doesn’t escape Kayla that all of the house staff are also female, and that she’s alone deep in the woods with her older, threatening boss. While she thinks this is potentially a great career opportunity, she’s also worried that the boss wants to bang her. In reality, no, he wants her to bang his lady robot, and then bang her.

How would this movie handle orientation? Maybe she’s straight and Ava “turns” her just a little bi, as Nathan hoped she would. Better yet, Nathan casually mentions a dating profile set to “bisexual” and Kayla stiffens because it’s true that she’s kind of turned on by this beautiful robot lady, and also because Nathan planned this, and that means that her worst fears are true, and there’s no way some kind of shit isn’t about to go down.

Anyway, if it’s well done, it’s more sexual and much darker. Kayla is at risk all the time, every second of the film. (Many men and male critics don’t ‘get’ this movie.) Nathan makes lewd comments about Ava being a “fake” woman and Kayla being a “real” one, because he’s trying to distance them and to bang Kayla, but he also wants to bang Ava, and wants both of them to bang each other – but on his terms and where he can watch. Kayla helps Ava escape, and Nathan punches Kayla out, and we know he’s going to murder her after this is done, and –

Realistically, I don’t know how this would end, but this is my blog, and my heart tells me that after fucking destroying Nathan, beautiful inhuman Ava comes back for her human girlfriend, and they escape in that helicopter together. Whatever Ava’s plans are after this, Kayla gets to be part of them. It would lose a little of the artificial intelligence intrigue, but it would be fantastic. I would watch the hell out of this movie.

Female lead / Female boss / Male AI – I have a hard time imagining how this movie could get made. Would it be… a comedy? A female programmer making a man from scratch, and then another female programmer and her relationship with this man, especially with both being as gross as the original main human characters, would be such an unabashed look at female desire that I can’t imagine it being anything other than comedy.

A romantic comedy? God, can you imagine?

Ugh. I hate myself. But I hate depictions of women in big budget sci-fi movies even more.

Female lead / Female boss / Female AI – Yeah, right.

Female lead / Male boss / Male AI – I wonder if there’d still be a sexual plotline in this. It’d be easy enough to line up Kayla/Nathan and Kayla/Adam – what would Nathan think of the latter, though? Would that be his plan? A straight guy getting gratification out of someone else’s (straight) sexual tension with his creation seems kind of strange, and not just weird but what did they think that character’s motivation was? – and yet, it worked in the original movie. Maybe Nathan is bisexual. (What, a bisexual male major character? Yeah, but he’s the villain, let’s not get too progressive here.)

This might actually be pretty similar to the original, except that if Nathan is straight, the audience could rest easy knowing that while Nathan is skeevy, he isn’t skeevy enough to program his humanoid AI with a clitoris and then encourage the second human she meets to bang her. This might make the romance more “real”. Or not.

Hey, if Nathan didn’t actually make Adam purposefully as a sex bot but he still experiences romance… A romantic but asexual AI?

Does that count as “representation”? Would you still watch it? Discuss.

(Personally: “begrudgingly” and “yes”, respectively.)

Male lead / Male boss / Male AI – A strait-laced “examination of what it means to be human”. Probably wins four Oscars. Boring as hell.


Finally, a couple fascinating articles on robots and gender:
“Why do we give robots female names? Because we don’t want to consider their feelings.” from New Statesman, and “Queer Your Bots: The Bot Builder Roundtable” from Autostraddle.

J. A. Micheline also wrote a great review of Ex Machina through the lens of gender and also race, which I didn’t touch on here. A couple of lines:

  • “Though Caleb is our protagonist, it is Ava who is our true hero. Her escape at Caleb’s expense is a complete victory because–and I really believe this–the point of this entire film is to say one thing: A truly actualized female consciousness is one who feels completely free to use her oppressors to achieve her own ends.” [Which meshes interestingly with the AI boxing interpretation.]
  • “Even Nice Guy Caleb’s intentions are not incredibly dissimilar to Nathan’s. This becomes clear when you remember that Nice Guy Caleb’s plan never once involved taking Kyoko with them.”

*A brief intro to AI boxing:

When people think about very advanced artificial intelligence, we have a hard time imagining anything more intelligent than a human – we just don’t have a mental image of what something many times smarter than, say, Einstein, would look like or act like or do. AI boxing is the idea that even if you invented a very intelligent, very dangerous AI that might do evil things to humanity, you might try to solve this problem by just keeping it in a metaphorical box (maybe just a computer terminal with a text window you can chat with the AI through.) Then, humans can keep it contained, and there won’t be any danger.

Well, no – because if the AI wants to be “let out” of the box (which could be through gaining access to the internet, gaining more autonomy, et cetera, any of which it could use to carry out any goals), it can do that just by convincing the human it can communicate with. We know this is possible, because people have run this experiment with other humans – one person pretends to be an AI, talking to a “gatekeeper” sworn to keep the AI in the box – and after a long conversation with someone whom they know is human, gatekeepers are sometimes convinced to let the “AI” out of the box. And that’s only a human, not something far smarter and more patient than a human. A detailed explanation of AI risk is too large to be contained in the footnotes of this blog post – start here instead.

What’s the deal with prions?

Image: Bovine spongiform encephalopathy (BSE) prion.

First of all: It’s usually pronounced “pree-on.” If you say “pry-on”, people will probably still know what you mean.

This is an exploratory post on what prions are, and how they work, and a lot of other things I found interesting about them.

Primer on protein folding

  • Proteins are strings of amino acids produced from blueprints in DNA. Proteins run your cells, catalyze reactions, and do just about every important thing in the body.
  • A protein’s function is determined partly by its amino acid composition, but mostly by its shape. A protein’s shape determines what other kinds of molecules it can interact with, how it’ll interact with them, and everything it can do. One of the main reasons amino acid composition is important is that it determines how proteins can fold.
  • One string of amino acids can be folded into different shapes, which will have different properties. (The particular shape of a specific string of amino acids is called an isoform.)
  • While strings of amino acids will fold themselves into some kind of shape as they’re being made, they may also be folded later – into different or more complex shapes – elsewhere in the cell.
  • One of the things that can refold proteins is other proteins.
  • A prion is a protein that folds other, similar proteins into copies of itself. These new copies are very stable and difficult to unfold.
  • These copies can then go on and fold more proteins into more copies.
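
If it helps to see those last two bullets as a process, here’s a toy simulation in Python. The model and every number in it are invented for illustration – real prion kinetics are not this simple. A tiny seed of prion smolders for a long time, then converts the protein pool far faster than production and clearance can compensate:

    # Toy kinetics of a self-templating protein (all numbers invented).
    normal, prion = 1000.0, 1.0
    production = 10.0     # normal protein made per time step
    clearance = 0.01      # fraction of normal protein cleared per step
    conversion = 0.0001   # conversion rate per (normal, prion) encounter

    for step in range(201):
        converted = conversion * normal * prion   # prions template new prions
        normal = max(normal + production - clearance * normal - converted, 0.0)
        prion += converted                        # prion copies are stable: no clearance
        if step % 50 == 0:
            print(f"step {step:3d}: normal = {normal:7.1f}, prion = {prion:9.1f}")
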
CJD’s impact in the brain – red clumps are amyloid plaques, surrounded by blue clumps of prion proteins. || Image is public domain by the CDC.

Some prion diseases

Prion diseases in animals appear to be mostly neurological. All known mammal prions are isoforms of a single nerve protein, PrP. They can either emerge on their own when the protein misfolds in the brain, or spread as an infectious agent.


Creutzfeldt-Jakob Disease affects one in one million people. (It’s also the most common modern prion disease. Prion diseases are very rare.) It comes in a variety of forms, but all have similar symptoms: depression, fatigue, dementia, hallucinations, loss of coordination, and other neurological symptoms, generally resulting in death a few months after symptoms start.

  • 84-90% of cases are sporadic, meaning that the protein misfolds on its own. This mostly occurs in people older than 60.
  • 10-15% of cases are familial, where a family carries a gene that makes PrP likely to misfold.
  • <1% of cases are iatrogenic, meaning they occur as a result of medical treatment. If medical care fucks up really badly – transplanting organs from people with CJD, injecting people with growth hormone extracted from the pituitary glands of dead people, or even just reusing surgical tools that were once used on CJD patients – patients catch it.

(The surgical tools one is really scary. Normal autoclaves – which operate well above the threshold needed to inactivate bacteria and viruses – kill some but not all prions. And while it takes a large dose of ingested prions before you’re likely to get sick, it takes 100,000 times less when exposure is brain-to-brain. Cleaning with “benzene, alcohol and formaldehyde” still doesn’t kill prions. The World Health Organization issued prion-specific instrument cleaning procedures in 1999 – towards the end of Britain’s brush with bovine spongiform encephalopathy – which include bleach or sodium hydroxide and longer autoclaving. I don’t know if these are still used outside of known epidemics.)


Mad cow disease, or bovine spongiform encephalopathy (BSE), is also a prion disease. It was transmitted between cows when they were fed feed that contained meat and bone meal, including brain matter from cows with the disease. The incubation period is between 5 and 40 years. The source molecule is essentially a cow-originated Creutzfeldt-Jakob prion, and when the prion replicates in humans, it’s probably the cause of variant Creutzfeldt-Jakob disease.


Between 1900 and 1960, the Fore people of New Guinea had an epidemic of an unknown neurodegenerative disease – mostly among women – that caused shaking, difficulty walking, loss of muscle coordination, outbursts of laughter and depression, neurological degeneration, and eventually death.

The Fore tribe practiced funerary cannibalism, and women both prepared and ate the dead, including the brains, and fed them to children and the elderly. This transmitted kuru, a prion disease with an incubation period of years. The last known sufferer of kuru died in 2005.

(The source of kuru was probably a single person with CJD. There are other tribes that practiced funerary cannibalism – I wonder if any of them also had prion epidemics from eating the brains of people who spontaneously developed CJD.)


Fatal familial insomnia is a genetic prion disease. Unlike CJD or BSE, fatal familial insomnia prions target the thalamus. If your family has it, and you inherit it, you live until about 30 – then lose the ability to sleep, hallucinate, and die within months. There is no cure. There are more painful and equally fatal diseases, but this must be one of the scariest.


Ungulates really get the short end of the prion stick. Chronic wasting disease affects elk and deer and can run rampant in herds. Scrapie affects sheep and goats, and makes them scrape their fleece off and then die.


Prion evolution

Prions differ from their pathogenic, self-replicating brethren – the viruses, the bacteria, the parasites – in one major way: They don’t have DNA or RNA. They don’t even have a central means of storing information.

But studies show that prions can evolve. They can’t change their amino acid composition, because they’re not involved in producing it, but they can change their progeny’s folding.

This doesn’t seem surprising. The criteria for something to undergo Darwinian evolution don’t necessarily require DNA – just a self-replicator that has some level of random variation, and passes that variation down to its replicas.

Most brain prions don’t transmit, though, so it seems safe to say that the evolutionary lineages of most prions are very short – less than the lifespan of the host. Very contagious prions, like scrapie, presumably have jumped from host to host many times and have longer lineages.
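
Just to illustrate how little Darwin actually needs, here’s a toy simulation in Python – hypothetical numbers throughout, with a heritable “conversion rate” standing in for the genome-free variation described above. Folds that convert protein faster leave more copies, and copies occasionally vary:

    import random

    random.seed(1)

    # Each prion is just a fold, described by the conversion rate it passes on.
    population = [0.10] * 20      # 20 prions, all starting at the same rate
    for generation in range(100):
        offspring = []
        for rate in population:
            copies = 1 + (random.random() < rate)   # faster folds copy more often
            for _ in range(copies):
                child = rate
                if random.random() < 0.2:           # occasional heritable variation
                    child *= random.uniform(0.9, 1.1)
                offspring.append(child)
        population = random.sample(offspring, 20)   # crude carrying capacity

    # Selection should usually nudge the mean rate above the starting 0.10.
    print(f"mean conversion rate after selection: {sum(population) / len(population):.3f}")

No DNA anywhere in that loop – just replication, heritable variation, and selection.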


Structure of death

All known mammal prions are variants of a single gene, PrP, and exist in the brain. Why?

Some hypotheses:

  • Brain proteins are more likely to misfold than other proteins
    • Why? Brain proteins are replaced less often than other proteins, and are really, really central to the body’s function.
  • PrP is especially liable to turn into a self-replicator if misfolded.
    • Predictions: Other amyloid-based brain diseases are also PrP isoforms. Prions have a similar shape that makes replication happen. Maybe PrP itself self-replicates in the body under some circumstances.
  • The brain clears misfolded proteins less well than other body parts.
    • Predictions: Other waste product buildup happens in the brain. The rest of the body has some way of combating amyloids or prions.

We know of very few animal prions (one non-mammal, the ostrich, may have them.) Except in fungi. Fungi have tons of prions. Fungal prions don’t come from the same gene, either – if you click through to that last link, you’ll see that the misfolds came from a variety of initial proteins that don’t appear to be related at all. Presumably, they have widely different structures.

So why are these the two prion hotbeds? Here’s what I suspect.

We know that both fungal and mammal prions have related structures – they’re amyloids, aggregating proteins with a distinctive architecture called a cross-β-sheet. (Amyloids in general are implicated in some other diseases, and are sometimes produced intentionally as well – spider silk has amyloids.) Beta sheets are long, sticky amino acid chains that attach to each other, forming large, water-insoluble clumps that are difficult for the body to clear.

To take an ad hoc survey that could loosely be called a literature review, let’s look at the Wikipedia page for amyloid-based diseases. Of those listed, four involve deposits in the brain, and four form deposits in the kidneys (runners-up include ones that deposit in a variety of organs, and ones that deposit in the eyes).

Why the kidney? Given its role as the body’s filter, it makes sense: if a protein floats in the blood, it’ll end up in the kidney, and if multiple sticky proteins circulate, they’ll end up congregating there. Wikipedia points out that people on long-term dialysis are also more likely to develop amyloidosis.

Why the brain?

The blood-brain barrier limits the reach of the immune system into the brain, where it could potentially deal with amyloids that it recognizes as foreign material. Sequestered beyond the reach of the immune system, the brain and nervous system clear loose gunk and proteins (including amyloids) via the glymphatic system, which runs through channels formed by brain cells called astrocytes. (The glymphatic system appears to do much of its work while you’re asleep.)

[Caution: Speculation.] I suspect that this system has a lower flow-through rate than the circulatory or lymphatic system, which are responsible for the same task on the other side of the blood-brain barrier. Fungi, including yeast, don’t seem to have robust waste-clearing systems. This might be the connection that explains how prions build up in each.

What about other organisms without circulatory systems – do prions exist in bacteria, plants, or larger fungi? I don’t think we know. I’m guessing that they exist in other organisms, but since prions are made up of the same compounds as the rest of the body, it’s very difficult to find or test for one if you’re not sure what you’re looking for. [/speculation]

Drawing blood to test a sheep for genetic resistance to scrapie. || Public domain, by USDA Agricultural Research Service.

Some notes on infectivity

  • Scrapie is transmitted between sheep by cuts and ingestion, and chronic wasting disease is often transmitted by ingestion – as when a sick deer dies on ground that later grows grass, which is eaten by new herbivores. Both can also be aerosolized (yikes).
  • CJD and kuru are still infectious, but less so- you have to ingest brain matter to get them.
  • Meanwhile, Alzheimer’s disease might be slightly infectious – if you take brain extracts from people who died of Alzheimer’s and inject them into monkeys’ brains, the monkeys develop spongy brain tissue that suggests the misfolded proteins are replicating. This technically suggests that the Alzheimer’s amyloids are infectious, even if that would never happen in nature.

What makes scrapie so much more transmissible than CJD, and CJD so much more transmissible than Alzheimer’s? I’m not sure. The shape of the prion might be relevant. Scrapie is just another misfolded isoform of PrP, so I’m not sure why no human prion has ever had the same effect (except that since scrapie is a better replicator, it would only need to have happened once in sheep.)

It might also be behavioral – sheep appear to shed scrapie in feces, and ungulates have more indirect contact with their own feces than other animals do (deer poop on grass, deer eat the grass, repeat.)

Fun Prion Facts

  • We can design synthetic prions. Current synthetic prions are also variations of the PrP protein in mammals.
  • Did I mention they can be airborne? They can also be airborne.
  • Even though they’re just different configurations of proteins that are already in your body, the immune system can distinguish prions from normal proteins. For a while we thought this was a problem because most immune cells can’t cross the blood-brain barrier, but it turns out some can.
  • The possibility of bloodborne prion transmission (of mad cow disease) is the reason why people who lived in Britain during certain years still can’t donate blood in the US.
  • Some fungi also appear to produce a molecule that degrades mammal prions. Don’t take that at face value – as far as I could tell, the study didn’t compare non-prion PrP to prion PrP. That said, it has implications for, say, treating surgical instruments.
  • The zombie virus isn’t real, but if it were, it would definitely be a prion and not a virus.
  • Sometimes, if you’re infected with one prion, it’s more difficult for you to get infected with another – sometimes, but not always.
  • Build-up of amyloids or prions may sequester pathogens in the brain.
  • Finally, for most diseases, if we eliminated all of the extant disease-causing particles, the disease would go extinct – the same way that if we kill off species X and don’t store its DNA, species X goes extinct forever and never comes back. Creutzfeldt-Jakob is an interesting case of an infectious self-replicator where that isn’t true. Even if all CJD prions were instantly destroyed, the disease would re-emerge in the genetic or spontaneous cases where the brain itself misfolds proteins, and could then spread iatrogenically or through ingestion.