Most surveys that discuss the matter almost certainly misrepresent the asexual population in one way or another. Fortunately, if you’re creating a survey, or interpreting results from a previously-conducted survey, there are ways to make your results more accurate!
This post is based on my previous post about asexuality, which contains more detailed sources and reasons why I think this topic is important. The idea of this post can probably also be applied to representing diverse sexual preferences or even gender identities (e.g. allow varied responses, don’t make assumptions), but the specific suggestions are targeted towards asexuality. Feel free to share this with people who are designing surveys.
Remember that asexuality and aromanticism exist
If your survey touches in any way on romance, sexuality, relationships, or related behaviors, the most important thing is to know and account for the fact that asexuality exists at all.
The basics: Asexuality is an umbrella term for people who don’t experience sexual attraction. 1-8% of people are or could be called asexual (more info here). Asexual people aren’t an easily-dismissed minority, and they are in your sample demographic. (Probably.) Aromanticism, similarly, is not having romantic interest. We don’t know how many aromantics there are, but they’re certainly out there. People may be aromantic and asexual, or either one, or neither. Some people consider asexuality and aromanticism to fall under the LGBTQ demographic, some people don’t. (The extended LGBTQIA+ acronym does include asexuals – that’s what the ‘a’ is supposed to stand for.) More information can be found here.
In representing asexual people in your results, the first question is what you’re using your data for.
My survey is about general identity/demographic information
We might expect 2x-4x as many romantic asexuals as aromantic asexuals (where do these numbers come from?). This is important because people on the asexual or aromantic spectrum have multiple identities – they might be biromantic and gray-asexual, or aromantic and homosexual, or heteroromantic and demisexual. This means that a question like the following is likely to lead to inaccurate answers:
What’s your orientation?
One community survey found that the number of asexuals doubled when asexuality was asked about separately. You could do the same thing:
What’s your orientation?
Are you asexual?
For people who don’t know what asexuality is, a less confusing solution is to allow respondents to check multiple boxes, e.g.:
Check which of the following best describe your sexual/romantic orientation:
It would also be nice (and more accurate) to include some other options:
( ) Gray-asexual ( ) Demisexual ( ) Other
You could also ask about romantic and sexual orientation separately:
What is your sexual orientation?
What is your romantic orientation?
Heteroromantic (attracted to another gender)
Homoromantic (attracted to your same gender)
Bi/panromantic (attracted to all genders)
Aromantic (do not experience romantic attraction)
(Edit, 3/4/17: Siggy points out in the comments that it’s important to include an “other” or write-in response on romantic orientation questions, as well as sexual orientation.)
You could also just have a write-in response:
What is your sexual/romantic orientation? ___________________
You can then bin responses like “straight” and “heterosexual” as meaning the same thing, or, say, “aro-ace” and “gray-asexual lesbian” as both being on the asexual spectrum.
There is a downside in that people don’t necessarily know what “heteroromantic” means right away, even if the term describes them. (So if you’re going with options that use less-familiar words, include definitions.)
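If you go the write-in route, the binning step is easy to script. Here’s a minimal sketch in Python; the synonym table and bin names are my own hypothetical examples, not a standard taxonomy, and you’d extend the table with whatever spellings actually show up in your responses:

```python
# Bin free-text orientation write-ins into broader categories.
# The synonym table below is a made-up example, not a standard.
SYNONYMS = {
    "straight": "heterosexual",
    "heterosexual": "heterosexual",
    "gay": "homosexual",
    "lesbian": "homosexual",
    "homosexual": "homosexual",
    "bi": "bi/pansexual",
    "bisexual": "bi/pansexual",
    "pansexual": "bi/pansexual",
    "ace": "asexual spectrum",
    "asexual": "asexual spectrum",
    "aro-ace": "asexual spectrum",
    "gray-asexual": "asexual spectrum",
    "demisexual": "asexual spectrum",
}

def bin_response(write_in: str) -> str:
    """Map a raw write-in to a bin; anything containing an
    ace-spectrum keyword lands in the asexual-spectrum bin."""
    text = write_in.strip().lower()
    if text in SYNONYMS:
        return SYNONYMS[text]
    # Multi-word answers like "gray-asexual lesbian": check keywords.
    for word in text.replace("/", " ").split():
        if SYNONYMS.get(word) == "asexual spectrum":
            return "asexual spectrum"
    return "other/unclassified"

print(bin_response("Straight"))              # heterosexual
print(bin_response("gray-asexual lesbian"))  # asexual spectrum
```

The keyword pass is what lets “aro-ace” and “gray-asexual lesbian” both count toward the asexual spectrum, per the binning described above.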
Weed out troll answers with a lizardman question
The problem with more questions or write-ins is that they open the door to troll responses and confused ones, perhaps from people who disagree with the premise of the question, or who don’t understand it.
Since people who troll on a gender or orientation question are likely to troll on other parts of the survey, you could throw in a lizardman question – an absurd question designed to weed out troll respondents (or at least calibrate the honesty of participants).
In middle school, we got drug use surveys that asked us to check if we had ever done marijuana, heroin, hallucinogens, amphetamines, prescription drugs, inhalants, or derbisol (also known as DB, dirt, wagon wheels, or hope). We asked the health teacher what “derbisol” was after the test, and she looked it up, and derbisol isn’t real – it’s a lizardman answer. (Apparently, 18.2% of high-schoolers in some groups have claimed to use derbisol. Remember: if you don’t talk to your kids about wagon wheels, bloggers will.)
The point is that you can adapt a lizardman question to a variety of contexts.
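If your responses are in machine-readable form, acting on the lizardman question is a one-liner. A sketch with invented respondent records, using “derbisol” as the stand-in fake option:

```python
# Filter out respondents who claimed the fake option.
# "derbisol" stands in for whatever absurd choice your survey used,
# and these respondent records are invented examples.
respondents = [
    {"id": 1, "drugs_used": ["marijuana"]},
    {"id": 2, "drugs_used": []},
    {"id": 3, "drugs_used": ["derbisol", "inhalants"]},
]

LIZARDMAN_OPTION = "derbisol"

def is_probable_troll(respondent):
    """Flag anyone who checked the nonexistent drug."""
    return LIZARDMAN_OPTION in respondent["drugs_used"]

clean = [r for r in respondents if not is_probable_troll(r)]
troll_rate = 1 - len(clean) / len(respondents)

print(len(clean))            # 2
print(round(troll_rate, 2))  # 0.33
```

You can also keep the flagged rows and report troll_rate as a rough calibration of how much noise to expect in the rest of the answers.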
My survey is about sexual/romantic/relationship behavior
The keys here are A) remember that asexuality and aromanticism exist, and B) ask about behavior or preferences rather than making assumptions.
Many asexual people date people.
Some asexuals sometimes have sex.
Some people who don’t identify as asexual still don’t want to have sex for whatever reason.
Someone who’s gray-asexual may normally round themselves off as “asexual” on surveys, but have experienced sexual attraction before.
Some people are asexual but don’t know it.
Asexual people may or may not identify as queer.
So if your question is about, say, attitudes from people who have or want to have sex with women, don’t ask if they’re heterosexual/bisexual men or homosexual/bisexual women. Instead, ask if your respondent has or wants to have sex with women.
Same goes for relationships.
The Asexual Identification Scale is 12 questions about behavior and preferences that capture 90% of asexual people, and can also identify asexual people who don’t realize they’re asexual. If you’re curious specifically about asexual-type behaviors, this may be your answer.
My survey is gathering data for both demographics and behaviors
1. State what you’re using the data for. For instance, if you have one question to ask college students about their orientation and who they’re likely to date, state that your study is about dating preferences.
You won’t get a complete picture of people’s orientations, but you weren’t going to anyways with one multiple-choice question. And people with complicated identities (like “biromantic asexual”) are more likely to write in the part that represents who they’re planning to date, not have sex with. If you’re using the response to gather information about STD risk, make it clear that your question is about sexual activity. (And then clarify what “sexual activities” you’re talking about, since people define that differently too and it’s probably relevant to STD risk. Specificity counts!)
2. Avoid over-generalizing from your results. If you’re using data from a question like the first one (“pick one: homosexual, heterosexual, bi/pansexual, or asexual”), realize that your answers for who dates or has sex with whom are necessarily fuzzy, because your results are representing asexuals and aromantics poorly.
Edited and updated as of 3/4/17. TL;DR: The popularly cited figure of 1% of the population being asexual is probably wrong, and the true fraction is probably higher. Determining this is hampered by the fact that asexuality is an umbrella term, and that most people still don’t know what asexuality is.
Content warning: discussion of sex.
An asexual is someone who does not experience sexual attraction. Unlike celibacy, which people choose, asexuality is an intrinsic part of who we are. Asexuality does not make our lives any worse or any better, we just face a different set of challenges than most sexual people. There is considerable diversity among the asexual community; each asexual person experiences things like relationships, attraction, and arousal somewhat differently. Asexuality is just beginning to be the subject of scientific research. [Asexual Visibility and Education Network, Overview]
Popular literature says that 1% of the population is asexual. Is this true?
The 1% figure comes from Anthony F. Bogaert, whose 2004 paper analyzed a survey of 18,876 adults in British households who answered the following question:
I have felt sexually attracted to…
Only females, never to males
More often to females, and at least once to a male
About equally often to males and females
More often to males, and at least once to a female
Only males, never to females
I have never felt sexually attracted to anyone at all.
In the follow-up questions, they were asked: “How old were you when you first had any type of experience of a sexual kind – for example, kissing, cuddling, petting – with someone of the opposite sex?” (As well as for the same sex.) I believe that subjects saw that question after having answered the first one.
1% of respondents chose the last option (“I have never felt sexually attracted to anyone at all”), and this is where the “1% of people are asexual” number comes from.
Why am I not content with this number?
First of all, people sometimes frame this as “1% of the population identifies as asexual.” We have no idea if this is true.
That said, aside from being over a decade old, the question itself likely doesn’t represent the asexual (or potentially asexual) population. Many asexuals do experience sexual attraction, but extremely rarely (depending on how rarely, these people might consider themselves gray-asexual, or just functionally asexual with maybe a handful of exceptions.) Many asexuals experience romantic or aesthetic attraction, and it can be very difficult to distinguish between romantic and aesthetic and sexual attraction if your culture doesn’t give you the affordance for doing so. So even if the households surveyed represent the general population, I would still expect the 1% number to under-represent how many asexual people there are.
As a counter-point, it’s also certainly possible to have a low libido or low sexual interest for reasons related to physical health, mental health, or something else. While some people in this camp might permanently identify as asexual, others don’t, or might only realize that their low sexual interest isn’t as innate as they thought once they get treatment, or improve other parts of their lives. Late bloomers might also develop sexual attraction far after their peers. So we’d expect there to be some people who identify as asexual but actually aren’t (or wouldn’t be in better circumstances), as well.
Note that in a later piece, Bogaert says that as awareness of asexuality grows, he doesn’t know what the true number is either.
Kinsey’s “Group X”
Alfred Kinsey, the founder of modern sexology, created his famous 1-6 “Kinsey Scale” of heterosexuality to homosexuality. He also included “Group X”, a group he found to have “no socio-sexual contacts or reactions.” I haven’t read Kinsey’s work myself, but the Asexuality Wiki quotes from Sexual Behavior of the Human Female that:
[Table: % of respondents in “Group X”, broken down by sex for unmarried, married, and previously married people.]
14-19% of unmarried women! (Obviously, there’s a selection effect here – if you’re asexual-aromantic, you’re much less likely to get married.) I don’t know how much of the population at the time fell into the married/unmarried/previously-married categories. Also, the data is old, and Kinsey may not have distinguished romantic and sexual attraction.
That said, it’s safe to say that if Kinsey’s population was anywhere near representative, >1% of respondents fell into “category X”. If the “married” and “unmarried” people were each 25% of the total, we’re talking 5.5-7.8% of people in “Group X”.
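For the record, the arithmetic behind that estimate is just an average. A sketch, taking the per-sex Group X rates implied by Kinsey’s figures (roughly 1.5–2% of men and 9.5–13.5% of women) and assuming a 50/50 sex split on top of the 25/25 married/unmarried assumption above:

```python
# Back-of-the-envelope for the overall "Group X" share.
# Per-sex rates (in percent) are the ranges implied by Kinsey's
# figures: ~1.5-2% of men and ~9.5-13.5% of women in Group X.
men_low, men_high = 1.5, 2.0
women_low, women_high = 9.5, 13.5

# Assuming a 50/50 sex split, the overall rate is just the average
# of the men's and women's rates.
overall_low = (men_low + women_low) / 2
overall_high = (men_high + women_high) / 2

print(overall_low, overall_high)  # 5.5 7.75
```

Rounding the upper end gives the ~5.5–7.8% range quoted above.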
A 2014 survey of University of California system campuses, with respondents composed roughly evenly of students and faculty/staff (~80,000 total), asked respondents to check which term best described their sexual orientation, out of: asexual, bisexual, gay, heterosexual, lesbian, queer, questioning, or other (“please specify”). 4.6% of respondents chose “asexual”. While limited to college students, faculty, and staff, and not the question I would have asked, this still had by far the greatest sample size of any survey discussed here.
A 1983 study by Paula S. Nurius of 689 mostly university students from New York, Kansas, Wisconsin, Hawaii, and California found 10% of women and 5% of men were “asexual”, having low scores on both homosexual and heterosexual preferences according to the Sexual Activity and Preference Scale (SAPS). (They also found that this group was the most likely to be depressed and have self-esteem problems. 😦 ) I can’t find a copy of the SAPS online, so I don’t know whether it clearly distinguishes romantic and sexual preferences, or conflates them as Bogaert’s question does, but its figure is still higher than 1%, at least in the college population.
Two reasons why surveying for asexuality is hard
1) Asexuality is an umbrella term
The asexual label includes a lot of different identities. Most people think of “intimate relationships” as a single concept, the point you happen to be at on the relationship escalator.
But actually they’re two separate things, romantic attraction and sexual attraction (tumblr calls this the “split attraction model”). Even more realistically, each is less a single scale than a smorgasbord of individual preferences.
A great deal of people are either aromantic but sexual, or asexual but romantic. This means that they’re grabbing some of the dishes from the “romance” side and almost none from the “sexual” side, or vice versa. I think this is tough because the romantic/sexual distinction isn’t very clear if you haven’t thought about it before. Our culture describes cuddling as foreplay and insists that all relationships move along an escalator from dating to sex to marriage to babies. But while Clickhole tells us that kissing is sex, many aces will clarify that there is, in fact, a big difference between all of these things.
This is complicated in many cases:
When, later in the above survey, Bogaert calls kissing, cuddling, and sexual intercourse all “experiences of a sexual kind.” (To be fair, this could also be due to a shift in the meaning of the word “sexual”.)
When a survey asks if the reader is “heterosexual, homosexual, bisexual, or asexual”, and that’s the only question about sexual or romantic orientation. (Which one does a romantic ace choose? Does the survey-maker know about the split-attraction model? Aren’t their meaningful relationships more important than their hypothetical sex life?)
When a survey asks about one kind of behavior, like not having a libido, and assumes that this means the same thing as asexual.
Asexuality is usually assumed to be a personal preference, but if you’re raised in a culture that says Not Having Sex Is Good And Virtuous, you’re more likely to think that you don’t want sex if you also think of yourself as Good And Virtuous.
2) Most people don’t know what asexuality is
By way of analogy, let’s imagine a world where food intolerances exist but are completely unknown and unaddressed. If you eat a sandwich every day for lunch and get sick every afternoon, you’re probably going to… keep eating sandwiches. You’re not going to think “hey, maybe I’m gluten intolerant,” or even, “maybe I should try eating something different for lunch?” You might notice that on a day you skip lunch, you don’t feel sick, but you’re going to keep eating sandwiches – a sandwich is as good a lunch as any, and you don’t have a framework for any counter-evidence.
This is hermeneutical injustice, also known as “why didn’t anyone tell me that there was a word for that?” In our world, asexuality is, of course, a known concept – but it’s not well-known or accepted enough for everyone to be able to tell whether or not it applies to them.
There’s a great deal of societal pressure to date, get married, have sex, etc., and people might do these without really wanting to. And then say “well, of course I have a sex drive, I mean, I’m married, after all,” never noticing that their beliefs are circular.
The typical mind fallacy means ace people may assume that everybody thinks the same way they do about sex or romance, and date people anyways.
People may have heard the word “asexual”, but not understand that, e.g., you can be romantic and asexual, or that it’s not only a trauma response or medical symptom, or that you can have a high libido and be asexual.
If you haven’t heard of people who are happy not having sex, you probably haven’t considered that you might be happy not having sex.
The 2013 DSM-5 diagnostic criteria for female sexual interest/arousal disorder (and likewise for male hypoactive sexual desire disorder) make an exception for people who identify as asexual. While it’s nice that asexual identity is no longer pathologized the way it was as recently as 2012, the exception is useless if people don’t know what asexuality is.
I subscribe to Ozy’s view of identity labels as being about communicating preferences. So the question I’d like to be able to answer is “how many people are there who either identify as asexual, or would find it useful to do so, given the societal leeway?”
Alternatively, asexuality is also commonly described as “lack of sexual attraction to other people”. There’s certainly room for nuance there, but it’s still pretty useful.
Because of the above, I’d like to explore options other than asking “are you asexual?” and scaling up.
One thing that seems to work well in sociology is asking about behavior, not self-identifiers. For instance, asking men if they are gay will get you one answer. Asking men if they sometimes have sex with other men gets a larger pool – perhaps they’re closeted, bisexual, used to identify as gay, or regularly have ‘bud sex’ with their guy pals but identify as straight. Self identification is hard, behavior is a little more straightforward.
Because asexuality is an umbrella term, asking about behavior is difficult. The Asexuality Identification Scale (AIS) seems to be our answer – researchers came up with a bunch of questions about attitudes towards sex (e.g. “My ideal relationship would not involve sexual activity”, rated 0 (disagree strongly) to 5 (agree strongly)), gave them to ace-identified participants from the Asexual Visibility and Education Network, and chose the most predictive ones. 93% of these subjects got a score above 40 on the resulting 12-item questionnaire (the questions are available here).
This seems pretty good to me. People who hang around AVEN aren’t necessarily representative of the larger ace population, and they’re selected for knowing that they’re asexual, but it is a popular central message board for ace people, and acknowledges (and presumably contains) ace people of a variety of different stripes. So I feel reasonably comfortable saying that if respondents answer honestly, this scale will catch most (~90%) of the ace or potentially ace people.
(Answering the questions honestly is tricky, though. I suspect that many people who are asexual but haven’t realized it yet will lean towards the sexual end of the test scores, and won’t after realizing it. I suppose the way to test this is to ask an enormous number of people to take the test, then have them do it again five years later, and see if any of them have started identifying as asexual in the meantime.)
In coming up with the questions for this quiz, they compared the 176 asexual participants to 716 non-ace participants, recruited off of Craigslist / psychology research websites / their university study pool. Of those, 4% scored above 40.
This could be read as the test’s false-positive rate. I would like to offer a counter-proposal: it’s closer to the baseline rate of asexuality in the general population. Unfortunately, it doesn’t seem like anybody else has given this survey to a large random sample.
One reason this might not represent reality is because it seems possible that advertising for the study mentioned that it was a study on sexual behaviors, and I imagine a lot of people closer to the asexual end of the smorgasbord would say “nope, not really my area of expertise” and move on, leaving them under-represented in the quiz. Alternatively, maybe they’d think “well, I appear to have a different relationship with sex than other people I know, so I should take this survey”, and they’d be over-represented. I don’t know how the survey was presented.
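Mechanically, scoring an instrument like the AIS is just a sum and a cutoff. Here’s a sketch following the post’s description (12 items, 0–5 agreement scale, score above 40 as the flag); the example responses are invented:

```python
# Score a 12-item, 0-5 agree/disagree questionnaire like the AIS.
# The responses below are made up; the >40 cutoff is the one the
# AIS authors report catching ~93% of ace-identified participants.
CUTOFF = 40

def ais_score(responses):
    """Sum 12 items, each answered 0 (disagree) to 5 (agree)."""
    assert len(responses) == 12
    assert all(0 <= r <= 5 for r in responses)
    return sum(responses)

example = [5, 4, 5, 3, 4, 5, 4, 3, 5, 4, 4, 5]  # hypothetical respondent
score = ais_score(example)
print(score, score > CUTOFF)  # 51 True
```

Since 12 items at 5 points each max out at 60, a cutoff of 40 means a respondent has to lean toward the asexual end on most items, not just a few.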
Designing better surveys
If you’re making a survey and really only care about people’s self-identified sexuality, you can still do better than most people!
Clearly distinguishing romance and sex
The 2014 AVEN community survey found that only 20% of asexual-identified survey respondents also identified as aromantic. If the survey respondents (mostly people from tumblr, as well as regular AVEN community members) are representative, this might indicate that 4 out of 5 asexuals do have romantic interests.
There isn’t a website like AVEN specifically about aromanticism, and sexual aromantics seem unlikely to be on an asexuality website. So it’s also hard to say what percent of aromantic people are also asexual.
Better accounting for asexuals in survey questions
A website I follow did a participant demographic survey in 2014, then another participant demographic survey in 2016. Most of the questions were the same between the two, but at least one changed: In 2014, the survey asked what the reader’s sexual orientation was. The results were:
Asexual: 59, 3.9%
Bisexual: 216, 14.4%
Heterosexual: 1133, 75.4%
Homosexual: 47, 3.1%
Other: 35, 2.3%
In tallying the data, Scott Alexander wrote: “[This question was poorly worded and should have acknowledged that people can both be asexual and have a specific orientation; as a result it probably vastly undercounted our asexual readers]”
In the 2016 survey, it asked about orientation. Then it asked if the reader was asexual.
Heterosexual: 1640, 70.4% (−5.0% from 2014)
Homosexual: 103, 4.4% (+1.3%)
Bisexual: 428, 18.4% (+4.0%)
Other: 144, 6.18% (+3.88%)
(The +/− figures are changes from the 2014 survey.)
Are you asexual?
Yes: 171 7.4%
No: 2129 92.6%
I don’t think that we can necessarily assume that 7.4% of the general population is asexual, generalizing from the self-selected readers of one website. I do, however, want to draw your attention to the fact that reported rates of asexuality nearly doubled when the question was asked separately! I think this should be common practice in surveys collecting data on sexual orientation.
It could even be improved upon – someone who’s both gay and asexual might call themselves homoromantic, but not homosexual – but I think it’s a good starting ground. Our hypothetical respondent is more likely to choose “homosexual” on the grounds that it’s partially right, and that their asexuality will be acknowledged in the next question.
A “check all that apply” box is another possible solution.
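One wrinkle with check-all-that-apply data: each respondent can count toward several labels, so percentages no longer sum to 100%. A tallying sketch with invented responses:

```python
# Tally multi-select orientation responses. Unlike single-choice,
# one respondent can count toward several labels, so percentages
# can sum past 100%. These responses are invented examples.
from collections import Counter

responses = [
    {"heterosexual"},
    {"homosexual", "asexual"},  # the overlap no longer gets lost
    {"asexual", "biromantic"},
    {"bisexual"},
]

counts = Counter(label for r in responses for label in r)
n = len(responses)

for label, count in counts.most_common():
    print(f"{label}: {count} ({count / n:.0%})")

print(counts["asexual"])  # 2
```

The same tally works for a separate “are you asexual?” question: count it independently rather than folding it into the single-choice totals.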
% of population that is asexual, across surveys:

Bogaert’s 2004 survey: 1%. Didn’t distinguish romantic/sexual orientation.
Kinsey’s “Group X”: ~5.5-7.8% (1.5-2% of men, 9.5-13.5% of women). From 1953; may not have distinguished romantic/sexual orientation. Not necessarily random.
Nurius’s 1983 study: 7.5% (5% of men, 10% of women). May not have distinguished romantic/sexual orientation. Of mostly college students.
Asexuality Identification Scale control group: 4%. Not necessarily representative.
University of California system survey: 4.6%. Of college students and staff/faculty.
(While the LessWrong reader survey was non-random (with a decidedly young/liberal/male/white/tech-y bent), at 7.4% self-identified asexual readers, it’s interesting to note that it lines up with some of the higher bound estimates for the general population.)
I feel pretty comfortable saying that 1-8% of the population is asexual, maybe closer to 4-8%. If I had a lot of money to put toward this right now, I would have some college students give the Asexual Identification Scale to a large random sample of people, and be more confident in their answer as the correct one.
Check back soon for practical ideas on making your data collection more representative of asexual populations.
Sometimes, the more I know about a topic, the less skeptical I am about new things in that field. I’m expecting them to be weird.
One category is deep sea animals. I’ve been learning about them for a long time, and when I started, nearly anything could blow my mind. I’d look up sources all the time because they all sounded fake. Even finding a source, I’d be skeptical. There’s no reason for anyone to photoshop that many pictures of that sea slug, sure, but on the other hand, LOOK AT IT.
Nowadays, I’ve seen even more deep sea critters, and I’m much less skeptical. I think you could make up basically any wild thing and I’d believe it. You could say: “NOAA discovered a fish with two tails that only mates on Thursdays.” Or “National Geographic wrote about this deep-sea worm that’s as smart as a dog and fears death.” And I’d be like “yeah, that seems reasonable, I buy it.”
Here’s a test. Five of these animals are real, and three are made up.
A jellyfish that resembles a three-meter-diameter circular bedsheet
A worm that, as an adult, has no DNA.
A worm that branches as it ages, leaving it with one head but hundreds of butts.
A worm with the body plan of a squid.
A sponge evolved to live inside of fish gills.
A sea slug that lives over a huge geographic region, but only in a specific two-meter wide range of depth.
A copepod that’s totally transparent at some angles, and bright blue from others.
A shrimp that shuts its claws so fast it creates a mini sonic boom.
(Answers at bottom of page. Control-F “answers” to jump there.)
Of course, I’m only expecting to be surprised about information in a certain sphere. If you told me that someone found a fish that had a working combustion engine, or spoke German, I’d call bullshit – because those things are clearly outside the realm of zoology.
Still, there’s stuff like this. WHY ARE YOU.
Some other categories where I have this:
Modern American politics
Florida Man stories
Head injury symptoms/aftermath
Places extremophiles live
Note that these aren’t cases where I tend to underapply skepticism – these are cases where, most of the time, not being skeptical works. If people were making up fake Florida Man stories, I’d have to start being skeptical again, but until then, I can rely on reality being stranger than I expect.
What’s the deal? Well, a telling instance of the phenomenon, for me, is archaeal viruses.
Some of these viruses are stable and active in 95° C water.
This archaeal virus is shaped like a wine bottle.
This one is shaped like a lemon.
This one appears to have evolved independently and shares no genes with other viruses.
This one builds seven-sided pyramids on the surfaces of cells it infects.
These are really surprising to me because I know a little bit about viruses. If you know next to nothing about viruses, a lemon-shaped virus probably isn’t that mind-blowing. Cells are sphere-shaped, right? A lemon shape isn’t that far from a sphere shape. The ubiquitous spaceship-shaped T4 is more likely to blow your mind.
Similarly, if you were a planet-hopping space alien first visiting earth, and your alien buddy told you about the giant garbage-bag shaped jellyfish, that probably wouldn’t be mind-blowing – for all you know, everything on earth looks like that. All information in that category is new to you, and you don’t have enough context for it to seem weird yet.
At the same time, if I studied archaeal viruses intensely, I’d probably get a sense of the diversity in the field. Some strange stuff like the seven-sided pyramids would still come along as it’s discovered, but most new information would fit into my models.
This suggests that for certain fields, there’s going to be some amount of familiarity where I’m surprised by all sorts of things, but on the tail ends, I either don’t know enough to be surprised – or already know everything that might surprise me. In the middle, I have just enough of a reference class that it frequently gets broken – and I end up concluding that everything is weird.
Whether viruses are alive or not is a silly question. Here’s why.
(I make a handful of specific claims here that I expect are not universally agreed upon. In the spirit of tagging claims and also as a TL;DR, I’ll list them.)
Whether things are alive or not is a categorization issue.
The criteria that living organisms should be made of cells is a bad one, even excluding viruses.
Some viruses process energy.
A virus alone may not process energy, but a virus-infected cell does, and meets all criteria for life.
Viruses are not an edge case in biology, they’re central to it.
The current criteria for life seem to be specifically set up to exclude viruses.
What does it mean to be alive?
Whether viruses are alive is a semantic issue. It isn’t a question about reality, in the same way that “how many viruses are there?” or “do viruses have RNA?” are questions about reality. It’s a definitional question, and whether they fall in the territory of “alive” or not depends on where you draw the borders.
Fortunately, scientists tentatively use a standard set of borders. This is not exactly set in stone, but it’s a starting point. In intro biology in college, I learned the following 7 characteristics (here, copied from Wikipedia):
Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature
Organization: being structurally composed of one or more cells — the basic units of life
Metabolism: transformation of energy by converting chemicals and energy into cellular components (anabolism) and decomposing organic matter (catabolism). Living things require energy to maintain internal organization (homeostasis) and to produce the other phenomena associated with life.
Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size in all of its parts, rather than simply accumulating matter.
Adaptation: the ability to change over time in response to the environment. This ability is fundamental to the process of evolution and is determined by the organism’s heredity, diet, and external factors.
Response to stimuli: a response can take many forms, from the contraction of a unicellular organism to external chemicals, to complex reactions involving all the senses of multicellular organisms. A response is often expressed by motion; for example, the leaves of a plant turning toward the sun (phototropism), and chemotaxis.
Reproduction: the ability to produce new individual organisms, either asexually from a single parent organism or sexually from two parent organisms.
The simple answer
Viruses meet all of the criteria for living things, except 2) organization (being made of cells) and maybe 3) metabolism.
The complicated answer
For the complicated answer, let’s go a level deeper.
Simply put, criterion 2) states that living things must be made of cells.
Criterion 3) states that living things must metabolize chemical energy in order to power their processes.
Are viruses made of cells?
Okay, here’s what I’ve got. I think 2) is a bad criterion. I think that criteria for living things should not be restricted to earth, and therefore not restricted to our phylogenetic history. Cells are a popular structure on earth, but if we go to space and find large friendly aliens that are made of proteins, reproduce, evolve, and have languages, we’re not going to call them “non-living” just because they run on something other than cells. Even if the cell criterion is useful up until that point, we’d change it after we found those aliens, which suggests it wasn’t a good criterion in the first place.
(Could large aliens not be made out of cells? Difficult to say – multicellularity has been a really, really popular strategy here on earth, having evolved convergently at least 25 times. But cells as we know them only evolved once or twice. Also, it’s not clear to what degree convergent evolution applies to things outside of our particular evolutionary history, because n=1.)
So no, viruses don’t meet criterion 2), although the importance of criterion 2) is debatable.
Do viruses process energy?
What about criterion 3)? Do viruses process energy? Kind of.
Let’s unpack “processing energy.” Converting one kind of chemical energy to another is pretty generic. In bacteria and eukaryotes, what does that look like?
Go ahead. Enlarge it. Look around. Contemplate going into biochemistry. Here’s where it starts to get complicated.
One of the major energy sources in cells is converting adenosine triphosphate (ATP) into adenosine diphosphate (ADP). This transformation powers so many cellular processes in so many different organisms that it’s called the currency of life.
Bacteriophage T4 encodes an ATP→ADP-powered motor. It’s used during the virus’ reproduction, to package DNA inside nascent virus heads.
Some viruses of marine cyanobacteria encode various parts of the electron transport chain, the series of motors that pump protons across membranes and create a gradient that results in the synthesis of ATP. They encode these as a sort of improvement on the ones already present in the hosts.
Do those viruses process chemical energy? Yes. If you’re not convinced, ask yourself: Is there some other pathway you’d need to see before you consider a virus to encode a metabolism? If so, are you absolutely certain that we will never find such a virus? I don’t think I would be.
Wait, you may say. Sure, the viruses encode those and do those when infecting a host. But the viruses themselves don’t do them.
To which I would respond: A pathogenic bacterial spore is, basically, metabolically inert. If it nestles into a warm, nutrient-rich host, it blossoms into life. Our understanding of living things includes a lot of affordance for stasis.
By the same token, a virus is a spore in stasis. A virus-infected cell meets all the criteria of life.
(I think I heard this idea from Lindsay Black’s talk at the 2015 Evergreen Bacteriophage meeting, but I might be misremembering. The scientists there seemed very on-board with the idea, and they certainly have another incentive to claim that their subjects are alive, which is that studying living things sounds cooler than studying non-living things – but I think the point is still sound.)
Do we really want only some viruses to count as alive?
To summarize, cells infected by T4 or some marine cyanophages – and probably other viruses – meet all of the criteria of life.
It seems ridiculous to include only those viruses in the domain of ‘life’, and not others that lack those chemical processes. Viruses have phylogeny. Separating off some viruses as alive and others as not is pruning branches off of the evolutionary tree. We want a category of life that carves nature at its joints, and picking only some viruses does the opposite of that.
Wait, it gets more complicated. Some researchers have proposed giant viruses as a fourth domain of life (alongside the standard bacteria, archaea, and eukaryotes.) You’ll note that it’s giant viruses, and not all the viruses. That’s because viruses probably aren’t monophyletic. Hyperthermophilic crenarchaea phages, in addition to being a great name for your baby, share literally no genes with any other virus. Some other viruses have only extremely distant genetic similarities to others, which may have been swapped in by accident during past infections. This is not terribly surprising – we know that parasites have convergently evolved perhaps thousands of times. But it certainly complicates the issue of where to put viruses in the tree.
Viruses are not just an edge case
When people talk about the criteria of life, they tend to consider viruses as an edge case, a weird outlier. This is misleading.
Worldwide, viruses outnumber cells 10 times over. They’re not an edge case in biology – by number of individuals, or amount of ongoing evolution, they’re most of biology. And it’s rather suspicious that the standard criteria for life seem to be set up to include every DNA-containing evolving organism except for viruses. If we took out criteria 2) and 3), what else would that fold in? Maybe prions? Anything else?
Accepting that ‘life’ is a word that tries to draw out a category in reality, why do we care about that category? When we ask “is something alive?”, here are some questions we might mean instead.
Is something worth moral consideration? (Less than a bacteria, if any.)
Should biologists study something? (A biologist is much more suited to study viruses than a chemist is.)
Does something fit into the tree of life? (Yes.)
If we find something like it on another planet, should we celebrate? (Yes, especially because a parasite has to have a host nearby.)
When we think of viruses – fast moving, promiscuous gene-swappers, picking up genes from both each other and their hosts, polyphyletic, here from the beginning – I think of a parasitic vine weaving around the tree of life. It’s not exactly an answer, but it’s a metaphor that’s closer to the truth.
* Carl Sagan’s definition of life, presented to and accepted by a committee at NASA, is “a self-sustaining chemical system capable of Darwinian evolution.” This nicer, neater definition folds in viruses, prions, and aliens. The 7-point system is the one I was taught in college, though, so I’m writing about that.
TL;DR: Prediction & calibration parties are an exciting way for your EA/rationality group to practice rationality skills and celebrate the new year.
On December 30th, Seattle Rationality had a prediction party. Around 15 people showed up, brought snacks, brewed coffee, and spent several hours making predictions for 2017, and generating confidence levels for those predictions.
This was heavily inspired by Scott Alexander’s yearly predictions. (2014 results, 2015 results, 2016 predictions.) Our move was to turn this into a communal activity, with a few alterations to meet our needs and make it work better in a group.
Each person individually writes a bunch of predictions for the upcoming year. They can be about global events, people’s personal lives, etc.
If you use Scott Alexander’s system, create 5+ predictions for fixed confidence levels (50%, 60%, 70%, 80%, 90%, 95%, etc.)
Save your predictions and put them aside for 12 months.
Open up your predictions and see how everyone did.
To make this work in a group, we recommend the following:
Don’t share your confidence levels. Avoid anchoring by just not naming how likely or unlikely you think any prediction is.
Do share predictions. Generating 30+ predictions is difficult, and sharing ideas (without confidence levels) makes it way easier to come up with a bunch. We made a shared google doc, and everyone pasted some of their predictions into it.
Make predictions that, in a year, will verifiably have happened or not. (I.e., not “the academic year will go well”, which is debatable, but “I will finish the year with a 3.5 GPA or above”.)
It’s convenient to assume that, unless stated otherwise, predictions end by the next year (i.e., “I will go to the Bay Area” means “I will go to the Bay Area at least once in 2017.”) It’s also fine to make predictions with other end dates (“I will go to EA Global this summer.”)
Make a bunch of predictions first without thinking too hard about how likely they are, then assign confidence levels. This post details why. You could also generate a group list of predictions, and everyone individually lists their own confidence levels.
This makes a good activity for rationality/EA groups for the following reasons:
Practicing rationality skills:
Making accurate predictions
Assigning confidence levels
It’s open to many different knowledge levels. Even if you don’t know a thing about geopolitics, you can still give predictions and confidence levels about media, sports, or your own life.
More free-form and less intimidating than using a prediction market. You do not have to know about the details of forecasting to try this.
Natural time and recurring activity
You could do this at any point during the year, but doing it at the start of the year seems appropriate for ringing in the new year.
In twelve months, you have an automatic new activity, which is coming back together and checking everybody’s predictions from last year. Then you make a new set of predictions for next year. (If this falls through for some reason, everyone can, of course, still check their predictions on their own.)
Fostering a friendly sense of competitiveness
Everyone wants to have the best calibration, or the lowest Brier score. Everyone wants to have the most accurate predictions!
Brier values and graphs of ‘perfect’ vs. actual scores will give you different information. Alexander writes about the differences between these. Several of us did predictions last year using the Scott Alexander method (bins at fixed probabilities), although this year, everybody seems to have used continuous probabilities. The exact method by which we’ll determine how well-calibrated we were will be left to Seattle Rationality of 2018, but will probably include Brier values AND something to determine calibration.
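To make the scoring concrete, here’s a minimal sketch of both measures in Python. The function names and the sample predictions are made up for illustration; the two outputs are the numbers discussed above – a single Brier score, and a per-level calibration table to compare against ‘perfect’ calibration.

```python
# Sketch: scoring a year of predictions, each recorded as
# (stated probability, whether it came true).

def brier_score(predictions):
    """Mean squared difference between stated probability and outcome.
    Lower is better; 0 is perfect, 0.25 is what always guessing 50% earns."""
    return sum((p - float(outcome)) ** 2 for p, outcome in predictions) / len(predictions)

def calibration_bins(predictions, levels=(0.5, 0.6, 0.7, 0.8, 0.9, 0.95)):
    """For each fixed confidence level, the fraction of predictions made at
    that level which actually came true (well-calibrated ≈ matches the level)."""
    table = {}
    for level in levels:
        outcomes = [outcome for p, outcome in predictions if p == level]
        if outcomes:
            table[level] = sum(outcomes) / len(outcomes)
    return table

preds = [(0.9, True), (0.9, True), (0.9, False), (0.6, True), (0.6, False)]
print(brier_score(preds))        # → 0.27
print(calibration_bins(preds))   # 60% predictions came true half the time; 90% ones, two-thirds
```

With continuous probabilities instead of fixed bins, the Brier score still works unchanged; only the calibration table needs bucketing (e.g. grouping predictions into 0.5–0.6, 0.6–0.7, and so on).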
As I write this, it’s 4:24 PM in 2016, twelve days before the darkest day of the year. The sun has just set, but you’d be hard-pressed to tell behind the heavy layer of marbled gray cloud. There’s a dusting of snow on the lawns and the trees, and clumps on roofs, already melted off the roads by a day of rain. From my window, I can see lights glimmering in Seattle’s International District, and the buildings of downtown are starting to glow with flashing reds, neon bands on the Columbia Tower, and soft yellow on a thousand office windows. I’m starting to wonder what to eat for dinner.
It’s the eve before Seattle Effective Altruism’s Secular Solstice, a somewhat magical humanist celebration of our dark universe and the light in it. This year, our theme is global agriculture – our age-old answer to the question of “what are we, as a civilization, collectively going to eat for dinner?” We have not always had good answers to this question.
Civilization, culture, and the super-colony of humanity, the city, started getting really big when agriculture was invented, when we could concentrate a bunch of people in one place and specialize. It wasn’t much specialization, at first. Farmers or hunter-gatherers were the vast majority of the population, and the population of Ur, the largest city on earth, was around 65,000 people in 3000 BC. Today, farmers are 40% of the global population, and 2% in the US. In the 1890s, the city of Shanghai had half a million people. Today, it’s the world’s largest city, with 34 million residents.
What happened in those 120 years, or even the last 5000?
I’m a scientist, so the people I know of are scientists, and science is what’s shaped a lot of our agriculture in the last hundred years. When I think of the legacy of science and global agriculture, of people trying to figure out how we feed everyone, I think of three people, and I’ll talk about them here. I’ll go in chronological order, because it’s the order things go in already.
Fritz Haber, 1868-1934
Haber was raised in a Jewish family in Prussia, but converted to Lutheranism after getting his doctorate in chemistry – possibly to improve his odds of a high-ranking academic or military career. At the University of Karlsruhe in Germany, Haber and his assistant Robert Le Rossignol did the work that would win Haber a Nobel prize: they invented the Haber-Bosch process.
The chemistry of this reaction is pretty simple – it was a fact of chemistry at the time that if you added ammonia to a nickel catalyst, the ammonia decomposed into hydrogen and nitrogen. Haber’s twist was to reverse it – by adding enough hydrogen and nitrogen gas at a high pressure and temperature, the catalyst operates in reverse and combines the two into ammonia. Hydrogen is made from natural gas (CH4, or methane), and nitrogen gas is already 80% of the atmosphere.
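For reference, the overall reaction Haber reversed is the standard one (the enthalpy figure here is a textbook value, not from the post):

```latex
N_2 + 3\,H_2 \;\rightleftharpoons\; 2\,NH_3
\qquad \Delta H^\circ \approx -92\ \text{kJ/mol}
```

High pressure pushes the equilibrium toward ammonia (four gas molecules become two), while the high temperature is there for reaction speed rather than equilibrium – the forward reaction is exothermic, so heat actually disfavors ammonia.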
Here’s the thing – plants love nitrogen. Nitrogen is, largely, the limiting factor in land plants’ growth – when you see that plants aren’t growing like mad, it’s usually because they don’t have sufficient nitrogen to make new proteins. Give a plant nitrogen in a form it can assimilate, like ammonia, and it grows like mad. The world’s natural nitrogen deposits – guano and saltpeter – were being stripped away to nothing, applied to crops to feed a growing population.
When Haber invented his process in 1909, ammonia became cheap. A tide was turning. The limiting factor of the world’s agriculture was suddenly no longer limiting.
Other tides were turning too. In 1914, Germany went to war, and Haber went to work on chemical weapons.
During peace time a scientist belongs to the World, but during war time he belongs to his country. – Fritz Haber
He studied deploying chlorine gas, thinking that it would shorten the war. Its effect was described as “drowning on dry land”. After its first use on the battlefield, he received a promotion on the same night his wife killed herself. Clara Immerwahr, a fellow chemist and a pacifist, had shot herself with Haber’s military pistol. Haber continued his work. Scientists in his employ also eventually invented Zyklon B. First designed as a pesticide, the gas would be used after his death to murder his extended family (along with many others) in the Nazi gas chambers.
Anti-Jewish sentiment was growing through the last few years of his life. In 1933, he wasn’t allowed through the doors of his own institute. The same year, his friend and fellow German Jewish scientist Albert Einstein went to the German consulate in Belgium and gave back his passport – renouncing his citizenship of the Nazi-controlled government. Haber left the country, and died of a heart attack the next year.
I don’t know if Fritz Haber’s story has a moral. Einstein wrote about his colleague that “Haber’s life was the tragedy of the German Jew – the tragedy of unrequited love.” Haber was said to ‘make bread from air’; he was also called the father of chemical weapons. He certainly created horrors. What I might take from it more generally is that the future isn’t determined by whether people are good or bad, or altruistic or not, but by what they do, and by what happens to the work that they do.
Nikolai Vavilov – 1887-1943
We shall go into the pyre, we shall burn… But we shall not abandon our convictions. – Nikolai Vavilov
As a young but wildly talented agronomist in Russia, and director of the Lenin All-Union Academy of Agricultural Sciences for over a decade, the shrewd and charismatic Nikolai Vavilov wanted to make Russia an unmatched center of agricultural expertise. He went on a series of trips to travel the globe and retrieve samples. He observed that in certain parts of the world, one would find a much greater variety of a given crop species, with a wider range of characteristics and traits not seen elsewhere. This led to his breakthrough theory, the Vavilov centers of diversity: the greatest genetic diversity is found where a species originated.
What has this told us about agriculture? This morning for breakfast, I had coffee (originally from Ethiopia) with soy milk (soybeans originally from China), toast (wheat from the Middle East) with margarine (soy oil, China; palm oil, West and Southwest Africa), and chickpeas (Central Asia) with black bean sauce (Central or possibly South America) and pepper (India). One fairly typical vegan breakfast, seven centers of diversity.
He traveled to twelve Vavilov centers, regions where the world’s food species were originally cultivated. He traveled in remote regions of the world, gathering unique wheat and rye in the Hindu Kush, Spain, and Portugal, teff in Somalia, sugar beet and flax in the Mediterranean, potatoes in Peru, fava beans and pomegranates and hemp in Herat. He was robbed by bandits in Eritrea, and nearly died riding horseback along deep ravines in the Pamirs. The seeds he gathered were studied carefully back in Russia, tested in fields, and most importantly, cataloged and stored – by gathering a library of genetic diversity, Vavilov knew he was creating a resource that could be used to grow plants that would suit the country’s needs for decades to come. If a pest decimates one crop, you can find a resistant crop and plant it instead. If drought kills your rice, all you need to do is find a drought-tolerant strain of rice. At the Pavlovsk Experimental Research Station, Vavilov was building the world’s first seed bank.
In Afghanistan, he saw wild rye intermingled with wheat in the fields, and used this as evidence of the origin of cultivated rye: that it wasn’t originally grown intentionally the way wheat or barley had been, but that it was a wheat mimic that had slipped into farms, taken advantage of the nurturing protection of human farmers, and, almost accidentally, become a popular food plant at the same time. Other Vavilovian mimics include oats and Camelina sativa.
While he travelled the world and became famous in the burgeoning global scientific community, Russia was changing. Stalin had taken over the government. He was collectivizing the country’s farms, and the scientific academies were dismissing staff based on bourgeois origins and narrowing their focus to work of practical importance for the good of the people. A former peasant was working his way up through the agricultural institutions: Trofim Lysenko, who claimed that his theory of ‘vernalization’ – adapting winter crops to behave more like summer crops by treating the seeds with heat – would grow impossible quantities of food and solve hunger in Russia. Agricultural science was politicized in a way that it never had been – Mendelian genetics and the existence of chromosomes were seen as unacceptably reactionary and foreign. Instead, a sort of bastardized Lamarckism was popular – aside from being used by Lysenko to justify outrageous promises of future harvests that never quite came in, it said that every organism could improve its own position – a politically popular implication, but one which failed to hold up to experimental evidence.
Vavilov’s requests to leave the country were denied. His fervent Mendelianism and the way he fraternized with Western scientists were deeply suspicious to the ruling party. As his more resistant colleagues were arrested around him, his institute filled up with Lysenkoists, and his work was gutted. Vavilov refused to denounce Darwinism. Crops around Russia were failing under the new farming plans, and people starved as Germany invaded.
Vavilov’s devoted colleagues and students kept up his work. In 1941, the German Army reached the Pavlovsk Experimental Research Station, interested in seizing the valuable samples within – only to find it barren.
Vavilov’s colleagues had taken all 250,000 seeds in the collection by train into Leningrad. There, they hid them in the basement of an art museum and watched them in shifts all throughout the Siege of Leningrad. They saw themselves as protecting Russia’s future in agriculture. When the siege lifted in 1944, twelve of Vavilov’s scientists had starved to death rather than eat the edible seeds they guarded. Vavilov’s collection survived the war.
Gardening has many saints, but few martyrs. – T. Kingfisher
In 1940, Vavilov was arrested, and tortured in prison until he confessed to a variety of crimes against the state that he certainly never committed.
He survived for three years in the gulag. The German army advanced on Russia and terrorized the state. Vavilov, the man who had dreamed of feeding Russia, starved to death in prison in the spring of 1943. His seed bank still exists.
Vavilov’s moral, to me, is this: Science can’t be allowed to become politicized. Whatever the facts are, we have to build our beliefs around them, never the other way around.
Norman Borlaug, 1914-2009
Borlaug was raised by Norwegian immigrant parents on a family farm in Iowa. He studied crop pests, and had to take regular breaks from his education to work: he worked in the Civilian Conservation Corps during the Dust Bowl alongside starving men, and for the Forest Service in remote parts of the country. In World War II, he worked on adhesives and other compounds for the US military. In 1944, he joined a project sponsored by the Rockefeller Foundation and the Mexican Ministry of Agriculture to improve Mexico’s wheat yields and stop it from having to import most of its grain. The project faced opposition from local farmers, mostly because wheat rust had been killing their crops. This wasn’t a problem unique to Mexico – populations were growing globally. Biologist Paul Ehrlich wrote in 1968, “The battle to feed all of humanity is over … In the 1970s and 1980s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now.”
Borlaug realized that by harvesting seeds in one part of the country and quickly moving them to another, the government could take advantage of the country’s two growing seasons and double the harvest.
By breeding many wheat strains together, farmers could make crops resistant to many more diseases.
He spread the use of Haber’s ammonia fertilizers, and bred special semi-dwarf strains of wheat that held up to heavy wheat heads without bending, and grew better in nitrogen fertilizers.
Nine years later, Mexico’s wheat harvest was six times larger than it had been in 1944, and it had enough wheat to export.
Borlaug was sent to India in 1962 and, along with Mankombu S. Swaminathan, did it again. India was at war, dealing with famine and starvation, and importing the grain necessary for survival. They used Borlaug’s strains, and by 1968, India was growing so much wheat that the infrastructure couldn’t handle it. Schoolhouses were converted into granaries.
His techniques spread. Wheat yields doubled in Pakistan. Wheat yields in the world’s least developed countries doubled. Borlaug’s colleagues used the same process on rice, and created cultivars that were used all over Asia. Borlaug saw a world devastated by starvation, recognized it for what it was, and treated it as a solvable problem. He took Haber’s mixed legacy and put it to work for humanity. Today, he’s known as the father of the Green Revolution, and his work is estimated to have saved a billion lives.
We would like his life to be a model for making a difference in the lives of others and to bring about efforts to end human misery for all mankind. – Statement from Borlaug’s children following his death
When I think of modern global agriculture, this is who I think of. I’ve been trying to find something connecting Vavilov and the Green Revolution, and haven’t turned up much – although it’s quite conceivable there is a connection, given Vavilov’s inspirational presence and the way he shared his samples throughout the globe. Borlaug’s prize wheat strain that saved those billion lives, Norin 10-Brevor 14, was a cross between Japanese and Washingtonian wheat. Past that, who knows?
One of the organizations protecting crop diversity today is the Consultative Group on International Agricultural Research (CGIAR), which was founded in 1971 by the Rockefeller Foundation as the Green Revolution was in full swing. They operate a variety of research stations worldwide, mostly at Vavilov centers in the global south, where crop diversity is highest. Their mission is to reduce global poverty, improve health, manage natural resources, and increase food security.
They must have been inspired by Vavilov’s conviction that crop diversity is essential for a secure food supply. If a legacy that’s saved literally a billion human lives can be said to have a downside, it’s that diets were probably more diverse before, and now 12 species make up 75% of our food plant supply. Monocultures are fragile, and if conditions change, a single disease is more likely to take out all of a crop.
In 2008, CGIAR brought the first seed samples into the Svalbard Seed Vault – a concrete structure buried in the permafrost. It’s constructed as a refuge against whatever the world might throw. If electricity goes out, the permafrost will keep the seeds cool. If sea levels rise, the vault is built on a hill. The land it’s on is geologically stable and very remote. And it stores 1,500,000 seeds – six times more than Vavilov’s 250,000 – at no cost to countries that use it.
Let it be known: starvation is on its last legs. We have a good thing going here. Still, with global warming and worse things still looming over the shoulder of this tentative victory, let’s give thanks to the movers and shakers of global agriculture for tomorrow: the people ensuring that whatever happens next, we are going to be fed.
We live in a rather pleasant time in history where biotechnology is blossoming, and people in general don’t appear to be using it for weapons. If the rest of human existence can carry on like this, that would be great. In case it doesn’t, we’re going to need back-up strategies.
Here, I investigate some up-and-coming biological innovations with a lot of potential to help us out. I kept a guiding question in mind: will biosecurity ever be a solved problem?
If today’s meat humans are ever replaced entirely with uploads or cyborg bodies, biosecurity will be solved then. Up until then, it’s unclear. Parasites have existed since the dawn of life – we’re not aware of any organism that doesn’t have them. When considering engineered diseases and engineered defenses, we’ve left the billions-of-years-old arms race for a newer and faster paced one, and we don’t know where an equilibrium will fall yet. Still, since the arrival of germ theory, our species has found a couple broad-spectrum medicines that have significantly reduced threat from disease: antibiotics and vaccines.
What technologies are emerging now that might fill the same role in the future?
Bacteriophage therapy
What it is: Viruses that attack and kill bacteria.
What it works against: Bacteria.
How it works: Bacteriophage are bacteria-specific viruses that have been around since, as far as we can tell, the dawn of life. They occur frequently in nature in enormous variety – it’s estimated that for every bacteria on the planet, there are 10 phages. If you get a concentrated stock of bacteriophage specific to a given bacteria, they will precisely target and eliminate that strain, leaving any other bacteria intact. They’re used therapeutically in humans in several countries, and are extremely safe.
Biosecurity applications: It’s hard to imagine even a cleverly engineered bacteria that’s immune to all phage. Maybe if you engineered a bacteria with novel surface proteins, it wouldn’t have phage for a short window at first, but wait a while, and I’m sure they’ll come. No bacteria in nature, as far as we’re aware, is free of phage. Phage have been doing this for a very, very long time. Phage therapy is not approved for wide use in the US, but has been established as being safe and quite effective. A small dose of phage can have powerful impacts on infection.
Current constraints: Lack of research. Very little current precedent for using phage in the US, although this may change as researchers hunt for alternatives to increasingly obsolete antibiotics.
Choosing the correct phage for therapeutics is something of an art form, and phage therapy tends to work better against some kinds of infection than others. Also, bacteria will evolve resistance to specific phages over time – but once that happens, you can just find new phages.
DRACO
What it is: DRACO (Double-stranded RNA Activated Caspase Oligomerizer), an engineered protein designed to make virus-infected cells self-destruct.
What it works against: Viruses. (Specifically, double-stranded RNA, single-stranded RNA, and double-stranded DNA viruses (dsRNA, ssRNA, and dsDNA), which is most human viruses.)
How it works: dsDNA, dsRNA, and ssRNA virus-infected cells each produce long sequences of double-stranded RNA at some point while the virus replicates. Human cells make dsRNA occasionally, but it’s quickly cleaved into handy little chunks by the enzyme Dicer. These short dsRNAs then go about influencing gene expression in the cell. (Dicer also cuts up incoming long dsRNA from viruses.)
DRACO is a fusion of several proteins that, in concert, go a step further than Dicer. It has two crucial components:
A domain that recognizes and binds viral sequences on long dsRNA
A domain that triggers apoptosis (programmed cell death) when multiple DRACOs bind along the same strand
Biosecurity applications: The viral sequences it recognizes are pretty broad, and presumably, it wouldn’t be hard to generate additional recognition sequences for arbitrary sequences found in any target virus.
Current constraints: Delivering engineered proteins intracellularly is a very new technology. We don’t know how well it works in practice.
DRACO, specifically, is extremely new. It hasn’t actually been tested in humans yet, and may encounter major problems in being scaled up. It may be relatively trivial for viruses to evolve a means of evading DRACO. I’m not sure that it would be trivial for a virus to not use long stretches of dsRNA. It could, however, evolve not to use targeted sequences (less concerning, since new targeting sequences could be used), inactivate some part of the protein (more concerning), or modify its RNA in some way to evade the protein. Even if resistance is unlikely to evolve on its own, it’s possible to engineer resistant viruses.
On a meta level, DRACO’s inventor made headlines when his NIH research grant ran out, and he used a kickstarter to fund his research. Lack of funding could end this research in the cradle. On a more meta level, if other institutions aren’t leaping to fund DRACO research, experts in the field may not see much potential in it.
Programmable RNA vaccines
What it is: RNA-based vaccines that are theoretically creatable from just the genetic code of a pathogen.
What it works against: Just about anything with protein on its outside (virus, bacteria, parasite, potentially tumors.)
How it works: An RNA sequence is made that codes for some viral, bacterial, or other protein. Once the RNA is inside a cell, the cell translates it and expresses the protein. Since it’s not a standard host protein, the immune system recognizes and attacks it, effectively creating a vaccine for that molecule.
The idea for this technology has been around for 30-odd years, but the MIT team that developed this was the first to package the RNA in a branched, virus-shaped structure called a dendrimer (which can actually enter and function in the cell.)
Biosecurity applications: Sequencing a pathogen’s genome should be quite cheap and quick once you get a sample of it. An associate professor claims that vaccines could be produced “in only seven days.”
Current constraints: Very new technology. May not actually work in practice like it claims to. Might be expensive to produce a lot of it at once, like you would need for a major outbreak.
Broad-spectrum antivirals
What it is: Compounds that are especially effective at destroying viruses at some point in their replication process, and can be taken like other drugs.
What it works against: Viruses
How it works: Conventional antivirals are generally tested and targeted against specific viruses.
The class of drugs called thiazolides, particularly nitazoxanide, is effective against not only a variety of viruses, but a variety of parasites, both helminthic (worms) and protozoan (protists like Cryptosporidium and Giardia.) Thiazolides are also effective against bacteria, both gram-positive and gram-negative (including tuberculosis and Clostridium difficile). And nitazoxanide is incredibly safe. This apparent wonderdrug appears to disrupt the creation of new viral particles within the infected cell.
There are others, too. For instance, beta-defensin P9 is a promising peptide that appears to be active against a variety of respiratory viruses.
Biosecurity applications: Something that could treat a wide variety of viruses is a powerful tool against possible threats. It doesn’t have to be tailored for a particular virus – you can try it out and go.
Also, using a single compound drastically increases the odds that a virus will evolve resistance. In current antiviral treatments, patients are usually hit with a cocktail of antivirals with different mechanisms of action, to reduce the chance of a virus developing resistance to them.
The space for finding new antivirals seems promising, but they won't solve viruses any more than antibiotics have solved bacterial infections – which is to say, they might help a lot, but will need careful shepherding and combination with other tactics to avoid a crisis of resistance. Viruses also tend to evolve more quickly than bacteria, so resistance is likely to appear much faster.
What it is: Genetically altering organisms to spread a certain gene ridiculously fast – such as a gene that drives the species to extinction, or renders them unable to carry a certain pathogen.
What it works against: Sexually reproducing organisms, vector-borne diseases (with sexually reproducing vectors.)
Biosecurity applications: Gene drives have been in the news lately, and they’re a very exciting technology – not just for treating some of the most deadly diseases in the world. To see their applications for biosecurity, we have to look beyond standard images of viruses and bacteria. One possible class of bioweapon is a fast-reproducing animal – an insect or even a mouse, possibly genetically altered, which is released into agricultural land as a pest, then decimates food resources and causes famine.
Another is release of pre-infected vectors. This has already been used as a biological weapon, including by Japan's infamous Unit 731, which used hollow shells to disperse fleas carrying the bubonic plague into Chinese villages. Once you have an instance of the pest or vector, you can sequence its genome, create a genetic modification, and insert the modification along with the gene drive sequences. This can either wipe the pest out, or make it unable to carry the disease.
Current constraints: No gene drive has actually been released into the wild yet. It may be relatively easy for organisms to evolve strategies around the gene drive, or for the drive genes to spread beyond the intended population. Even once a single gene drive – say, for malaria – is released, it will presumably have been studied in depth for safety in that particular case (both for direct effects on humans, and for not catastrophically altering the environment). The idea of a gene drive released on short notice is, well, a little scary. We've never done this before.
Additionally, there’s currently a lot of objection and fears around gene drives in society, and the idea of modifying ecosystems and things that might come into contact with people isn’t popular. Due to the enormous potential good of gene drives, we need to be very careful about avoiding public backlash to them.
Finding the right modification to make an organism unable to carry a pathogen may be complicated and take quite a while.
Gene drives act on the pest's time, not yours. Depending on the generation time of the organism, it may be quite a while before you can A) grow enough of the modified organism to productively release, and B) wait while the organism reproduces and spreads the modified gene to enough of the population to have an effect.
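As a rough illustration of that timeline, here's a toy model (my own invention, not from any real gene drive study): with a "perfect" drive, an offspring lacks the gene only if both parents lack it, so the non-carrier fraction squares every generation – fast, but still many generations of the pest's life cycle.

```python
def generations_to_spread(p0=0.01, target=0.99, max_gens=1000):
    """Deterministic random-mating sketch of a perfect gene drive.

    With a 'perfect' drive, an offspring lacks the gene only if *both*
    parents lack it, so the non-carrier fraction squares each generation.
    p0 is the released carrier fraction; returns generations until `target`
    of the population carries the drive.
    """
    p, gens = p0, 0
    while p < target and gens < max_gens:
        p = 1 - (1 - p) ** 2  # non-carriers: (1-p) -> (1-p)^2
        gens += 1
    return gens

# Releasing drive carriers at 1% of the population:
print(generations_to_spread())
```

Even in this best case, a 1% release takes on the order of ten generations to sweep the population – weeks for a mosquito, but months or years for a mouse or crop pest.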
What it is: Concentrated stocks of antibodies similar to the ones produced in your own body, specific to a given pathogen.
What it works against: Most pathogens, some toxins, cancers.
How it works: Antibodies are proteins produced by B-cells as part of the adaptive immune system. Part of the protein attaches to a specific molecule that identifies a virus, bacterium, toxin, etc. The rest of the molecule acts as a 'tag' – showing other cells in the adaptive immune system that the tagged thing needs to be dealt with (lysed, phagocytosed, disposed of, etc.).
Biosecurity applications: Antibodies can be found and used therapeutically against a huge variety of things. The response is effectively the same as your body’s, reacting as though you’d been vaccinated against the toxin in question, but it can be successfully administered after exposure.
Current constraints: Currently, while therapeutic antibodies are used in a few cases like snake venom and tumors, they’re extremely expensive. Snake antivenom is taken from the blood serum of cows and horses, while more finicky monoclonal therapeutics are grown in tissue culture. Raising entire animals for small amounts of serum is pricey, as are the nutrients used for tissue culture.
One possible answer is engineering bacteria or yeast to produce antibodies. These could grow antibodies faster, cheaper, and more reliably than cell culture. This is under investigation – E. coli doesn’t have the ability to glycosylate proteins correctly, but that can be added in with genetic engineering, and anyways, yeasts can already do that. The promise of cheap antibody therapy is very exciting, and more basic research in cell biology will get us there faster.
[Content warning: Talking about ways that people automatically gender other people. If this is a tough topic for you, be careful. Also, a caveat that I’m talking descriptively, not prescriptively, about people’s unconscious and instant ways of determining gender, and not A) what they might actually think about someone’s gender, and certainly not B) what anyone’s gender actually is.
Nonetheless, if I got anything wildly or offensively inaccurate, please do let me know.]
When you try to figure out a stranger's gender, you don't just use one physical trait – you observe a variety of traits, mentally assign each an evidence weight, compare them against any prior beliefs you have about the situation, and then – usually – your brain spits out a "man!" or "woman!" This is mostly unconscious and happens in under a second.
This is called “Bayesian reasoning” and it’s really cool that your brain does it automatically. Most people have some male, some female, and some neutral signals going on. ‘Long hair’ is usually a female signal, but if it’s paired with a strong jawline, heavy brows, and a low voice on someone who’s 6’5”, you’ll probably settle on ‘male’. Likewise, ‘wearing a suit’ is usually a pretty good male signal, but if the person is wearing makeup and is working at a hotel where everyone is wearing suits, you’re more likely to think ‘female’.
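To make the cue-combining picture concrete, here's a toy log-odds sketch. Every trait, weight, and prior here is invented for illustration – real snap judgments are far messier than a literal naive-Bayes sum – but it shows how one strong signal can be outvoted by several others.

```python
import math

# Invented log-odds evidence weights: positive -> "male" signal,
# negative -> "female" signal. None of these numbers are measured.
TRAIT_WEIGHTS = {
    "long hair": -1.5,
    "strong jawline": +1.0,
    "low voice": +1.5,
    "very tall": +2.0,
    "wearing a suit": +0.8,
    "wearing makeup": -1.8,
}

def read_gender(observed_traits, prior_log_odds=0.0):
    """Combine cue weights and a prior into a snap 'male'/'female' read."""
    log_odds = prior_log_odds + sum(TRAIT_WEIGHTS[t] for t in observed_traits)
    p_male = 1 / (1 + math.exp(-log_odds))  # logistic: log-odds -> probability
    return ("male" if p_male > 0.5 else "female"), p_male

# 'Long hair' alone reads female...
print(read_gender(["long hair"]))
# ...but paired with jawline, voice, and height, the read flips to male.
print(read_gender(["long hair", "strong jawline", "low voice", "very tall"]))
```

The `prior_log_odds` argument is where context like "everyone at this hotel wears suits" would enter, shifting the baseline before any traits are weighed.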
Then there are people with androgynous gender presentations – the people you look at and your brain stumbles, or else does spit out an answer, but with doubt. (As a cis but not-particularly-gender-conforming woman, this is people around me all the time.) When people are read as 'androgynous', I think they're doing one of three possible things:
1) Mixed Signals. Strong male and female signals at once. Think a dress and a beard, or a high-pitched voice and being 6’4” and muscular, or wearing a suit and eyeliner. Genderfuck is an aesthetic that goes for this.
Left: Drag queen Conchita Wurst. Right: Game of Thrones character Brienne of Tarth.
2) No gender signals. Not giving gender cues, or trying to fall in the middle of any that exist on a spectrum. I think of this one as usually involving de-emphasized secondary sex characteristics – flat chest, no facial hair – which might also mean a youthful, neotenous look. Or maybe a voice or hips or height or whatever that’s sort of in the middle. Some (but not all!) androgynous models have something like this going on.
Fashion-wise, every now and then a company that rolls out a gender-neutral clothing line is criticized because all the clothing is baggy, formless, and vaguely masculine. (See comments below on why this may be.) I think these bland aesthetics are going for ‘No Signals’ – baggy clothing conceals secondary sex characteristics, the plain colors call to mind sort of a blank slate.
3) Signals for Something Else. For a trait that would normally signal gender, signal something else entirely. Long hair is for women, short hair is for men, but a green mohawk isn’t either of those. You might speak in a high-pitched voice, or a low-pitched voice, or in falsetto with an accent. Men wear pants, women wear dresses, but nobody wears this:
Pictured: I don’t know what these people are signalling, but it’s sure not a binary gender. [New York Fashion Week, 2015.]
What does this imply?
I’m not sure.
I expect that people who do No Signals get less shit from bigots (harassment, violence, weird looks) than people in the other two categories (Mixed Signals or Signaling Something Else.) I would imagine that bigots are more likely to figure that No Signals people are clearly a binary gender that they just can’t read, whereas Mixed Signals people are perceived as intentionally going against the grain.
This is unfortunate, because if you want to be read as androgynous, it’s way easier to just do Mixed Signals than to conceal secondary sex characteristics in order to do No Signals. (Especially if your secondary sex characteristics happen to be more pronounced.) Fortunately, society in general seems to be moving away from ‘instant gender reads are your real gender’, and towards ‘there are lots of different ways to do gender and gender presentation’.
Signaling Something Else people probably also get harassment and weird looks, but possibly more because they’re non-conforming in ways that don’t have to do with gender.
Male Bias in Gender Interpretation
Also! There is a known trend suggesting that people are more likely to read ambiguous traits as male than female. This is probably because 'male' is seen as 'the default', because culture. (See: non-pet animals, objects other than cars and ships.) This observation seems to have originated with Kessler & McKenna (1978), and has held up in a few studies since. I'm not sure if the rule is completely generalizable, but here are a few things it might imply:
You may actually have to have more feminine traits than masculine ones to hit the Confusion Zone. For gender-associated traits that fall on a spectrum – chest size, voice pitch, some metric of facial shape, etc. – it might look like this:
Of course, there are also cases where people think a trait is associated with gender when, really, it’s not. That still might mean something like this:
One conclusion I've heard drawn from this: it explains why it's often harder for trans women to get automatically gendered correctly than for trans men. A trans woman has to conceal or remove a lot of 'male' traits to get read as female; trans men, meanwhile, don't have to go as far to hit 'male'.
Even gender distribution world
Let's say there are 100 gendered traits (wearing a dress or pants, long or short hair, facial hair or no facial hair, etc.). Now imagine a population where everybody has the "male" or "female" version of each trait assigned independently and at random. If the male-bias principle generalizes, you're likely to read more than half of these people as male.
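A quick Monte Carlo sketch of this thought experiment. The male-default bias is modeled with an invented threshold – the observer needs a clear majority (here 55 of 100) of female traits before reading someone as female – so the exact numbers are assumptions, not data.

```python
import random

random.seed(0)
N_TRAITS = 100
FEMALE_THRESHOLD = 55  # >50 would be an unbiased read; 55 models male-default bias

def read_as_male(person):
    # person is a list of 0/1 flags; 1 = the "female" version of a trait
    return sum(person) < FEMALE_THRESHOLD

# 10,000 people, each trait a fair coin flip
people = [[random.randint(0, 1) for _ in range(N_TRAITS)] for _ in range(10_000)]
frac_read_male = sum(read_as_male(p) for p in people) / len(people)
print(f"{frac_read_male:.0%} of coin-flip-traited people get read as male")
```

Under these made-up numbers, roughly four out of five people in the even-distribution world get read as male – a small per-trait bias compounds into a big aggregate skew.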
Gender presentation, and thus how you read gender, is deeply rooted in culture! If you see someone in garb from a culture you’re not familiar with, and you can’t tell their gender, it’s quite possible that they’re still doing intentional gender signals – just not in a way you can read.
Even for similar cultures, this might be different. When I was in England, people called me ‘sir’ all the time. This doesn’t happen often in Seattle. I have three theories for why:
People in England have different gendered trait distributions for deciding gender. Maybe in England, just seeing ‘tall’ + ‘short hair’ + ‘wearing a collared shirt’ is enough to tip the scale to ‘man.’
Where I was in England was just more culturally conservative than Seattle, and if I spent more time in, say, small towns in Southern or Midwest US, I’d also be ‘sir’d’ more.
People in England are more likely to say 'sir' or 'ma'am' at all. So if you asked a bunch of strangers in Seattle and England whether I was a man or a woman, the same percentage would say 'man', but I wouldn't notice it in Seattle.
I think 2 or 3 are more likely, but 1 would be interesting as well.
Ben Hoffman pointed out that this maps to classifications for people who don’t consistently vote for a major political party. Mixed Signals people are like swing voters or nonpartisan voters. No Signals people are political moderates or don’t vote at all. Signaling Something Else people are, like, anarchists. Or Pirate Party members.
The Bayesian Evidence model of gender identification doesn’t only apply when the result is inconclusive – often your brain will, say, match someone as ‘man’, but also observe that they’re doing some non-masculine things.
(The first thing to consider in this case is that your brain may be wrong, and they may not actually be a man at all.)
Anyways, what gender people are and what they signal to the world is more complex than an instantaneous read, and this is an important distinction. For instance, even when people look at me and think ‘woman’, they can tell that I’m not doing standard femininity either.
If you’re trying to cultivate auto-gendering people less often, I suspect that training your subconscious to quickly separate whatever traits from gender would be useful. Finding efficient ways to do this is left as an exercise to the reader.
It’s obviously possible to train your brain to look at someone and mentally assign them a gender other than the instantaneous response. I’ve also heard stories of people looking at people and automatically going “nonbinary”. I suspect that if you grew up in binary-gendered society, as so many of us tragically did, this is a thing you developed later in life. Maybe you learned this as a possible answer to the “confusion on gendering androgynous people” brain-state.
[This post has also been published on the Global Risk Research Network, a group blog for discussing risks to humanity. Take a look if you’d like more excellent articles on global catastrophic risk.]
Several times in evolutionary history, the arrival of an innovative new evolutionary strategy has led to a mass extinction, followed by a restructuring of biota and new dominant life forms. This may pose an unlikely but possible global catastrophic risk in the future, in which a spontaneous evolutionary strategy (like a new biochemical pathway or feeding strategy) becomes wildly successful and leads to extreme climate change and die-offs. This is also known as a 'biotic replacement' hypothesis of extinction events.
Biotic replacement in past extinctions
Is this still a possible risk?
Risk factors from climate change and synthetic biology
The shape of the risk
Identifying specific causes of mass extinction events may be difficult, especially since mass extinctions tend to be quickly followed by expansion of previously less successful species into new niches. A specific evolutionary advantage might be considered the cause when either no other major physical disruption (asteroid, volcano, etc.) was occurring, or when our record of such events doesn't fully explain the extinctions.
1. Biotic replacement in past extinctions
There are five canonical major extinction events that have occurred since the evolution of multicellular life. Biotic replacement has been hypothesized as the major mechanism for two of them: the late Devonian extinction and the Permian-Triassic extinction. I outline these, as well as four other extinction events.
Cyanobacteria became the first microbes to produce oxygen (O2) as a waste product, and began forming colonies 200 million years before the extinction event. O2 was absorbed into dissolved iron or organic matter, and the die-off began when these naturally occurring oxygen sinks became saturated, and oxygen – toxic to the anaerobic life of the time – began to fill the atmosphere.
The event was followed by die-offs, massive climate change, and permanent alteration of the earth’s atmosphere, and eventually the rise of the aerobic organisms.
The Ediacaran period was filled with a variety of large, autotrophic, sessile organisms of somewhat uncertain heritage, known today mostly from fossil evidence. Recent evidence suggests that one explanation for their disappearance is the evolution of animals, able to move quickly and re-shape ecosystems. This resulted in the extinction of the Ediacaran biota, and was followed by the Cambrian explosion, in which animal life spread and diversified rapidly.
Both modern plant seeds and the modern plant vascular system developed in this period. Land plants grew significantly as a result, now able to transport water and nutrients higher more efficiently – with maximum heights jumping from 30 cm to 30 m. Two consequences would have followed:
The increase in soil content produced more weathering of rocks, which released ionic nutrients into rivers. The higher nutrient levels would have increased plant growth – and then death – in the oceans, resulting in mass anoxia.
Less atmospheric carbon dioxide would have cooled the planet.
96% of marine species and 70% of land vertebrate species went extinct; 57% of families and 83% of genera were lost.
One hypothesis explaining the Permian-Triassic extinction events posits that an anaerobic methanogenic archaeon, Methanosarcina, developed a new metabolic pathway allowing it to metabolize acetate into methane, leading to exponential reproduction and the consumption of vast amounts of oceanic carbon. Volcanic activity around the same time would have released large amounts of nickel, a crucial but rare cofactor needed for Methanosarcina's enzymatic pathway.
The evolution of human intelligence and human civilization has led to mass climate alteration by humans. Particular adaptations within human society (e.g. agriculture, the use of fossil fuels) could also be considered here, but in terms of this hypothesis, the evolution of human intelligence and civilization is the driving evolutionary innovation.
Minor extinction events
Any single species that goes extinct due to a new disease can be said to have become extinct due to another organism's innovative adaptation. These cases are less well described as "biotic replacement", because the new pathogen can't replace its extinct host, but an evolutionary event still caused the extinction. A new disease may also attack the sole or primary food source of an organism, leading to its extinction indirectly.
2. Is this still a possible risk?
It seems unlikely that all possible disruptive evolutionary strategies have already happened. Disruptive new strategies are rare – while billions of new mutations arise every day, a new gene must meet stringent criteria in order to spread: it's actually expressed, it's passed on to progeny, it immediately conveys a strong fitness benefit to its bearer, it preserves any vital function of the old version of the gene, it's supported by the organism's other genes and environment, and its bearer isn't killed by random chance before reproducing. For instance, an unusually efficient new metabolic pathway isn't going to succeed if it's in a non-reproducing cell, if its byproducts are toxic to the host organism, if its host can't access the food required for the process, or if its host happens to be born during a drought and starves to death anyway.
Environmental conditions that make a pathway more or less likely to be ridiculously successful, meanwhile, are constantly changing. Given the rarity of ridiculously successful genes, it seems foolhardy to believe that evolution has already picked all the low-hanging fruit.
How worried should we be? Probably not very. The major extinction events listed above seem to be spaced 100-200 million years apart, suggesting roughly a 1-in-100,000,000 chance of such an event starting in any given year. For comparison, NASA estimates that asteroids capable of causing major extinction events strike the earth every 50-100 million years. The two threats are plausibly on the same order of magnitude.
(This number requires a few caveats: it's a high estimate, assuming that evolutionary advantages were a major factor in all of the cases above. An advantage that "starts" in one year may take millions of years to alter the biosphere or climate catastrophically. And once in 100 million years is an average – there's no reason to believe that disruptive evolutionary events, or asteroid strikes for that matter, occur at regular intervals.)
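For what it's worth, the spacing-to-probability arithmetic can be written out explicitly by treating these events as a Poisson process – a simplification the caveats above already warn about:

```python
import math

# If events are spaced T years apart on average, the Poisson rate is 1/T
# events per year, and P(at least one event in the next H years) is
# 1 - exp(-H/T). Horizons chosen purely for illustration.
for spacing_years in (100e6, 200e6):
    rate = 1 / spacing_years
    for horizon in (1, 10_000, 1_000_000):
        p = 1 - math.exp(-rate * horizon)
        print(f"spacing {spacing_years:.0e} yr, next {horizon:>9,} yr: p = {p:.2e}")
```

Even over a million-year horizon, the probability stays in the low percents – consistent with "probably not very" worried, at least under the Poisson assumption.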
On a smaller scale, entire species are occasionally wiped out by a single disease. This is more likely to happen when species are already stressed or in decline. Data on how often this happens, or what fraction of extinctions are caused by a novel disease, is hard to find.
3. Risk factors from climate change and synthetic biology
Two risk factors are worth noting which may increase the odds of a biotic replacement event – climate change and synthetic biology.
Historically, a catastrophic evolutionary innovation seems to follow other massive climate disruption, as in the Permian-Triassic extinction explanation that followed volcanic eruptions. A change in conditions may select for innovative new strategies that quickly take over and produce much more disruption than the instigating geological event.
While the specific nature of the next disruptive evolutionary innovation may be nigh-impossible to predict, this suggests that we should give more credence to environmental alteration as a threat – via climate change, volcanic eruptions, or asteroids – since changing environments will select for disruptive new alleles (or resurface preserved strategies). This means that a minor catastrophic event could snowball into a globally catastrophic or existential threat.
The other emerging source of alleles as-of-yet unseen in the environment comes from synthetic biology, as scientists are increasingly capable of combining genes from distinct organisms and designing new molecular pathways. While genes crossing between wildly different organisms is not unheard of in nature, the increased rate at which this is being done in the laboratory, and the fact that an intentional hand is selecting for viability and novelty (rather than natural selection and random chance), both imply some cause for alarm.
A synthetic organism designed for a specific purpose may disperse from its intended environment and spread widely. This is probably a particular risk for organisms using completely synthetic, novel pathways unlikely to have evolved in nature, rather than previously evolved genes – otherwise, the naturally occurring genes would probably already have seized the low-hanging evolutionary fruit and expanded into the available niches.
4. The shape of the risk
How does this risk compare to other existential risks? It is not especially likely to occur, as described in Part 2. The precise shape or cause of the risk is harder to determine than, say, an asteroid strike. Also, as opposed to asteroid strikes or nuclear wars, which have immediate catastrophic effects, evolutionary innovations involve significant time delays.
Historically, two time delays appear to be relevant:
Time for the evolution to become widespread
Presumably, this is quicker in organisms that disperse/reproduce more quickly. E.g., this could be fairly quick for an oceanic bacterium with a short generation cycle, but slow for something like humans – it took 180,000 years between the first appearance of modern humans and their eventual spread to the Americas.
Time between the organism’s dispersal and the induction of a catastrophe
E.g., during the great oxygenation event, it took 200 million years from the evolution of cyanobacteria to the point when the available oxygen sinks filled up and a crisis occurred. (At least some of this time was the period required for cyanobacteria to diversify and become commonplace.)
During the Azolla event, Azolla ferns accumulated for 800,000 years, causing steady climate change. The modern threat from anthropogenic global warming is much steeper than that.
What are the actual threats to life?
The great oxygenation event and the Permian-Triassic extinction hypothesis involve the dispersal of a microbe that induces rapid, extreme climate change.
Other events such as volcanoes erupting may change the environment such that a new strategy becomes especially successful, as in the Permian-Triassic extinction event.
Faster, stronger, cleverer predation
The Ediacaran extinction event and the Holocene extinction event involved the dispersal of an unprecedentedly capable predator – animals and humans, respectively.
This seems unlikely to be a current risk. The risk from runaway artificial intelligence somewhat resembles this concern.
Death from disease
Any event in which a novel disease causes a species to go extinct has a direct impact. Additionally, a disease might cause one or more major food sources to go extinct (for humans or animals.)
Globalization and global trade have increased the risk of a novel disease spreading worldwide. This also mirrors current concerns over engineered bioweapons.
5. What next?
Disruptive evolutionary innovation is problematic in that there don't appear to be clear ways of preventing it – evolution has been indiscriminately optimizing away for billions of years, and we don't appear to be especially able to stop it. Building civilization-sustaining infrastructure that is robust to a variety of climate change scenarios may increase our odds of surviving such a catastrophe. Additionally, any such disruptive event is likely to unfold over a long period of time, meaning that we could likely mitigate or prepare for the worst effects. However, evolutionary innovation hasn't been explored or studied as an existential risk, and more research is needed to clarify the magnitude of the threat, and which – if any – interventions are possible or reasonable to study now.
Questions for further study:
How common are extinction events due to disruptive evolutionary innovation?
What factors make these evolution events more likely?
How often do species go extinct due to single disease outbreaks?
Can small-scale models help us improve our understanding of the likelihood of global warming inducing “runaway” scenarios involving microbial evolution?
What man-made environmental changes could potentially lead to runaway microbial evolution?
It’s the main way scientists communicate their findings to the world, in some ways making it the carrier of humanity’s entire accumulated knowledge and understanding of the universe.
It’s terrible for two reasons: accessibility and approachability. The first post in this series discussed accessibility: how to find papers that will answer a particular question, or help you explore a particular subject.
This post discusses approachability: how to read a standard scientific journal article.
Scientific papers are written for scientists in whatever field the journal they’re published in caters to. Fortunately, most journal articles are also written in such a way that you can figure out what they’re saying even if you’re a layperson.
(Except for maybe math or organic chemistry synthesis. But if you’re reading about math or organic chemistry as a layperson, you’re in God’s hands now and I can’t help you.)
Okay, so you’ve got your 22-page stack of paper on moose feeding habits, or the effects of bacteriophage on ocean acidification, or gravitational waves, or whatever. What now? There are two cardinal rules of journal articles:
You usually don’t have to read all of it.
Don’t read it page by page.
Journal articles are conveniently broken into sections. (They often use the names given below, or close synonyms.) I almost always read them in the following order:
Abstract

The abstract is the TL;DR of the article – the summary of what the studies found. Conveniently, it comes first. The abstract is very useful for deciding whether you actually want to read the rest of the article or not. Abstracts often use very dense, technical language, so if you don't understand what's going on in the abstract, don't sweat it.
Introduction

As a layperson, the introduction is your best friend. It's designed to take the reader from only a loose understanding of the field and "zoom in" to the actual study. It's supposed to build the context you need to understand the experiment itself: background on the field, what we already know about the topic at hand, historical context, why the researchers did what they did, and why it's important. It'll also define terms and acronyms that are crucial to the rest of the paper.
It may not actually be easy language. At this point, if you encounter a term or concept that’s unfamiliar (and that the researchers don’t describe in the introduction), start looking it up. Just type it into Wikipedia or Google, and if what you get seems to be relevant, that’s probably it.
Conclusions/Discussion

In a novel, skipping to the end to see how the suspense plays out is considered "bad form" and "not the point." When reading papers, it's a sanity-saving measure. In this part of the paper, the researchers write about what conclusions they're drawing from their studies, and the implications. This is also done in fairly broad strokes that put the work in the context of the rest of scientific understanding.
Figures

Next, go to the figures strewn around the results section, just before the conclusions. (Some papers don't have figures – in that case, just read the results.) Figures will give you a good sense of the actual results of the experiments. Read the captions, too – figure captions are designed to be somewhat stand-alone, in that you shouldn't have to read everything else in the paper to tell what's going on in the figure.
Depending on your paper, you might also get actual pictures of the subject that illustrate some result. Definitely look at these. Figure out what you’re looking at and what the pictures are supposed to be telling you. Google anything you don’t understand, including how the images were obtained if it’s relevant.
In trying to interpret figures, look at the labels and axes – what's being compared, and what it's being measured by. Lots of graphs include measurements taken over time, but not all. Some figures include error measurements: each data point on a graph might be the average of several raw data points from individual experiments, and the error measures how different those raw points were from each other. A large percent error (or error bar, or number of standard deviations, etc.) means the original data points were far apart from each other; a small error means they were all close to the average value. If you see a type of graph you're not sure how to read, Google it.
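As a concrete illustration of what an error bar summarizes, here's the usual mean/standard-error calculation on some invented replicate measurements:

```python
import statistics

# Five invented repeats of the same measurement (e.g. one condition in a study)
replicates = [4.1, 3.8, 4.4, 4.0, 3.9]

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)        # sample standard deviation
sem = sd / len(replicates) ** 0.5        # standard error of the mean

print(f"plotted point: {mean:.2f} +/- {sem:.2f} (SEM)")
```

Tightly clustered replicates give a small SEM and a short error bar; scattered replicates would give a large SEM and a long one. Papers vary in whether bars show SEM, standard deviation, or a confidence interval, so check the caption.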
Results

The section that contains the figures also contains written information about what the researchers actually observed in the experiments they ran. It usually includes statistics as well, i.e., how statistically significant a given result is in the context of the study. The results are what the conclusions were interpreting, and they may also describe results or observations that didn't show up in figures.
Methods

Methods are the machinery of the paper – the nuts-and-bolts, nitty-gritty details of how the experiments were done, what was combined, where the samples came from, how results were quantified. It's critical to science because it's the instruction set other researchers can use to check what you did and see if they can replicate the results – but I'd also rather read Youtube comments on political debates than read methods all day. I'll read the methods section under the following circumstances:
I’m curious about how the study was done. (You do sometimes get good stuff, like in this study where they anesthetized snakes and slid them down ramps, then compared them to snakes who slid down ramps while wearing little snake socks to compare scale friction.)
I think the methodology might have been flawed.
I’m trying to do a similar experiment myself.
References

Papers cite their sources throughout, especially in the introduction. If I want to know where a particular fact came from, I'll find the citation in the works cited section and look up that paper.
Acknowledgement/Conflicts of Interest
Science is objective, but humans aren't. If your paper on "how dairy cows are super happy on farms" was sponsored by the American Dairy Association and Dairy Council, consider that the researchers have a strong incentive to reach a particular conclusion and keep receiving funding. If the researchers were employed by the American Dairy Association and Dairy Council, I'd be very tempted to throw the study out entirely.