I spent a memorable college summer – and much of the next quarter – trying to run a particular experiment involving infecting cultured tissue cells with bacteria and bacteriophage. The experiment itself was pretty interesting, and I thought the underpinnings were both useful and exciting. To prepare, all I had to do was manage to get some tissue culture up and running. Nobody else at the college was doing tissue culture, and the only lab technician who had experience with it was out that summer.
No matter, right? We had equipment, and a little money for supplies, and some frozen cell lines to thaw. Even though neither I, nor the student helping me, nor my professor, had done tissue culture before, we had the internet, and even some additional help once a week from a student who did tissue culture professionally. Labs all around the world do tissue culture every day, and have for decades. Cakewalk.
Five months later, the entire project had basically stalled. The tissue cells were growing slower and slower, we hadn’t been able to successfully use them for experiments, our frozen backup stocks were rapidly dwindling and of questionable quality, and I was out of ideas on how to troubleshoot any of the myriad things that could have been going wrong. Was it the media? The cells? The environment? Was something contaminated? If so, what? Was the temperature wrong? The timing? I threw up my hands and went back to the phage lab downstairs, mentally retiring to a life of growing E. coli at slightly above room temperature.
It was especially frustrating, because this was just tissue culture. It’s a fundamental technique of modern biology. It’s not an unsolved problem. It was just benchwork being hard to figure out without hands-on expertise. All I can say is: if any disgruntled lone wolves trying to start bioterrorism programs in their basements were also stuck between the third PDF from 1970 about freezing cells with a minimal setup and losing their fourth batch of cells because they gently tapped the container until it was cloudy but not cloudy enough, it’d be completely predictable if they gave up their evil plans right there and started volunteering in soup kitchens instead.
This is the memory I kept coming back to when reading Barriers to Bioweapons: The Challenges of Expertise and Organization for Weapons Development, by Sonia Ben Ouagrham-Gormley. I originally found her work on the Bulletin of the Atomic Scientists’ website, which was a compelling selling point even before I read anything. She had written a book that contradicted one of my long-held impressions about bioweapons – that they’re comparatively cheap and easy to develop.
It was obscure enough that it wasn’t at the library, but at the low cost of ending up on every watchlist ever, I got it from Amazon and can ultimately recommend it. It’s a well-researched and interesting counter to common intuitions about biological weapons, and it changed my mind about some of them.
I’ve written before:
For all the attention drawn by biological weapons, they are, for now, rare. […] This should paint the picture of an uneasy world. It certainly does to me. If you buy arguments about why risk from bioweapons is important to consider, given that they kill far fewer people than many other threats, then this also suggests that we’re in an unusually fortunate place right now – one where the threat is deep and getting deeper, but nobody is actively under attack.
Barriers to Bioweapons argues that actually, we’re not all living on borrowed time – that there are real organizational and expertise challenges to successfully creating bioweapons. She then discusses specific historical programs, and their implications for biosecurity in the future.
The importance of knowledge transfer
The first part of the book discusses in detail how tacit knowledge spreads, and how scientific progress is actually accomplished in an organization. I was fascinated by how much research exists here, for science especially – I could imagine finding some of this content in a very evidence-driven book on managing businesses, but I wouldn’t have thought I could find the same for, e.g., how switching locations tends to make research much harder to replicate because available equipment and supplies have changed just slightly, or that researchers at Harvard Medical School publish better, more-frequently-cited articles when they and their co-authors work in the same building.
Basically, this book claims – and I’m inclined to agree – that spreading knowledge about specific techniques is really, really hard. What makes a particular thing work is often a series of unusual tricks, the result of trial and error, that never makes it into the ‘methods’ of a journal. (The hashtag #OverlyHonestMethods describes this better than I could.)
All of that tacit knowledge is promoted by organizational structures and stored in people, so the movement and interaction of people is crucial in sharing knowledge. Huge problems arise when that knowledge is lost. The book describes the Department of Energy replacing nuclear weapons parts in the late 1990s, and realizing that they no longer knew how to make a particular foam crucial to thermonuclear warheads, that their documentation for the foam’s production was insufficient, and that anyone who had done it before was long retired. They had to spend nine years and 70 million dollars inventing a substitute for a single component.
Every now and then when reading this, I was tempted to think “Oh come on, it can’t be that hard.” And then I remembered tissue culture.
The thing that went wrong that summer was a lack of tacit knowledge. Tacit knowledge is very, very slow to build, and you can either do it by laboriously building that knowledge from scratch, or by learning from someone else who does. Bioweapon programs tend to fail because their organizations neither retain nor effectively share tacit knowledge, and so their hopeful scientific innovations take extremely long and often never materialize. If you can’t solve the problems that your field has already solved, you’re never going to be able to solve new ones.
For a book on why bioweapons programs have historically failed, this section is awkwardly useful reading for scientists – or anyone else trying to build communities that can effectively research and solve problems together. Incentives and cross-pollination are crucial; projects with multiple phases should have those phases vertically integrated; tacit knowledge stored in brains matters.
In the second part of the book, Ouagrham-Gormley discusses specific bioweapons programs – American, Soviet, Iraqi, South African, and that of the Aum Shinrikyo cult – why they failed at one or more of these levels, and why we might expect future programs to go the same way. It’s true that all of these programs failed to yield much in the way of military results, despite enormous expenditures of resources and personnel, and while I haven’t fact-checked the section, I’m tempted to buy her conclusions.
Secrecy can be lethal to complicated programs. Because of secrecy constraints:
- Higher-level managers or governments have to put more faith in lower-level managers and their results, letting them steal or redirect resources
- Sites are small and geographically isolated from each other
- Scientists can’t talk about their work with colleagues in other divisions
- Collaboration is limited, especially internationally
- Facilities are more inclined to try to be self-sufficient, leading to extra delays
- Maintaining secrecy is costly
- Destroying research or moving to avoid raids or inspections sets back progress
Authoritarian leadership structures go hand in hand with secrecy, and have similarly dire ramifications:
- Directives aren’t based in scientific plausibility
- Focus on results only means that researchers are incentivized to make up results to avoid harsh punishments
- Supervisors are also incentivized to make up results, which works, because their supervisors don’t understand what they’re doing
- Feedback only goes down the hierarchy, suggestions from staff aren’t passed up
- Working in strict settings is unrewarding and demoralizes staff
- Promotion is based on political favor, not expertise, and reduces quality of research
- Power struggles between staff reduce ability to cooperate
Sometimes cases are more subtle. The US bioweapons program ran from roughly 1943 to 1969, and didn’t totally fall prey to either of these – researchers and staff met at Fort Detrick at different levels and cross-pollinated knowledge with relative freedom. Crucially, it was “secret but legal, as it operated prior to the signature of the Biological Weapons Convention (BWC). Therefore, it could afford to maintain a certain degree of openness in its dealings with the outside world.”
Its open status was highly unusual. Nonetheless, while it achieved a surprising amount, the US program still failed to produce a working weapon after 27 years. It was ended in 1969, when the US renounced offensive biological weapons, shortly before signing the BWC itself.
Ouagrham-Gormley says this failure was mostly due to a lack of collaboration between scientists and the military, shifting infrastructure early on, and diffuse organization. The scientists at Fort Detrick made impressive research progress, including dozens of vaccines, and research tools including decontamination with formaldehyde, negative air pressure in pathogen labs, and the laminar flow fume hood used ubiquitously for biological work in labs across the world.
Used for, among other things, tissue culture. (Image: public domain, by TimVickers.)
But research and weaponization are two different things, and military and scientific applications rarely met. The program was never considered a priority by the military. In fact, its leadership – responsibilities and funding decisions – was ambiguously split across about a dozen government agencies, and it was reorganized and re-funded sporadically depending on what wars were going on at the time. Uncertainty and a lack of coordination ultimately led the program nowhere. It was amusing to learn that the same issue plaguing biodefense in the US today was also responsible for sinking bioweapons research decades ago.
Ouagrham-Gormley discussed the Japanese Aum Shinrikyo cult’s large bioweapons efforts, but didn’t discuss Japan’s military bioweapon program, Unit 731, which ran from 1932 to 1945 and included testing numerous agents on Chinese civilians, as well as a variety of attacks on Chinese cities. While the experiments conducted are among the most horrific war crimes known, its use in war – the release of bombs containing bubonic-plague-infected fleas, as well as other human, livestock, and crop diseases – killed between 200,000 and 600,000 people. Unless I’m very wrong, this makes it the largest modern bioweapon attack. Further attacks were planned, including on the US, but the program was ended and its evidence destroyed when Japan surrendered in World War II.
I haven’t looked into the case too much, but it’s interesting because that program appears to have had an unusually high death toll (for a bioweapon program). As far as I can tell, some factors were: the program having general government approval and lots of resources, stable leadership, a main location, and its constant testing of weapons on enemy civilians, which added to the death toll – they didn’t wait to develop weapons that were perfect, and gathered data on early tests, without much concern for secrecy. This program predated the others, which might have been a factor in its ability to test weapons on civilian populations (even though such use was technically forbidden by the 1925 Geneva Protocol on germ warfare).
Ramifications for the future
One interesting takeaway is that covertness has a substantial cost – forcing a program to “go underground” is a huge impediment to progress. This suggests that the Biological Weapons Convention, which has been criticized for being toothless and lacking provisions for enforcement, is actually already doing very useful work – simply by forcing programs to be covert at all. Of course, Ouagrham-Gormley recommends adding those provisions anyway, as well as checks on signatory nations – like random inspections – that more effectively add to the cost of maintaining secrecy for any potential efforts. I agree.
In fact, it’s working already. Consider:
- In weapons programs, expertise is crucial – in manufacturing, in the relevant organisms, and in bioweapons themselves.
- The Biological Weapons Convention has been in force since 1975. The huge Soviet bioweapon program continued secretly, but was shrinking by the late 1980s, and was officially acknowledged and ended in 1992.
- While the problem hasn’t disappeared since then, new experts in bioweapon creation are very rare.
- People working on bioweapons before 1975 are mostly already retired.
As a result, that tacit knowledge transfer is being cut off. A new state that wanted to pick up bioweapons would have to start from scratch. The entire field has been set back by decades, and for once, that statement is a triumph.
Another takeaway is that the dominant message, from the government and elsewhere, about the perils of bioweapons needs to change. Groups from Japan’s Unit 731 to al-Qaeda have started bioweapon programs because they learned that the enemy was scared that they would. This suggests that the meme “bioweapons are cheap, easy, and dangerous” is actively dangerous for biodefense. Aside from that, as demonstrated by the rest of the book, it’s not true. And because it encourages groups to make bioweapons, we should perhaps stop spreading it.
(Granted, the book also relays an anecdote from Shoko Asahara, the head of the Aum Shinrikyo cult, who after the failure of its bioterrorism project “speculat[ed] that U.S. assessments of the risk of biological terrorism were designed to mislead terrorist groups into pursuing such weapons.” So maybe there’s something there, but I strongly suspect that such a design was inadvertent and not worth relying on.)
I’m overall fairly convinced by the message of the book, that bioweapons programs are complicated and difficult, that merely getting a hold of a dangerous agent is the least of the problems of a theoretical bioweapons program, and that small actors are unlikely to be able to effectively pull this off now.
I think Ouagrham-Gormley and I disagree most on the dangers of biotechnology. This isn’t discussed much in the book, but when she references it towards the end, she calls it “the so-called biotechnology revolution” and describes the difficulty and hidden years of work that have gone into feats of synthetic biology, like synthesizing poliovirus in 2002.
It makes sense that the early syntheses of viruses, or other microbiological works of magic, would be incredibly difficult and take years of expertise. This is also true for, say, early genome sequencing, taking thousands of hours of hand-aligning individual base pairs. But it turns out being able to sequence genomes is kind of useful, and now…
That biotechnology is becoming more accessible seems true, and the book, for me, throws into critical light the importance of somehow keeping track of how accessible it is. Using DIYbio hobbyists as a case study might be valuable, or looking at machines like this “digital-to-biological converter for on-demand production of biologics”.
How low are those tacit knowledge barriers? How low will they be? There are obvious reasons to not necessarily publish all of these results, but somebody ought to keep track.
Ouagrham-Gormley does stress, I think accurately, that getting a hold of a pathogen is a small part of the problem. In the past, I’ve made the argument that biodefense is critical because “the smallpox genome is online and you can just download it” – which, don’t get me wrong, still isn’t reassuring – but that particular example isn’t immediately a global catastrophe. The US and the Soviet Union tried weaponizing smallpox, and it’s not terribly easy. (Imagine that you, you in particular, are evil, and have just been handed a sample of smallpox. What are you going to do with it? …Start some tissue culture?)
(Semi-relatedly, did you know that the US government has enough smallpox vaccine stockpiled for everyone in the country? I didn’t.)
…But maybe this will become less of a barrier in the future, too. Genetic engineering might create pathogens more suited for bioweapons than extant diseases. They might be well-tailored enough not to require dispersal via the clunky, harsh munitions that have stymied past efforts to turn delicate microbes into weapons. Obviously, natural pandemics can happen without those – but could human alteration give a pathogen that much advantage over the countless pathogens randomly churned out by humans and animals daily? We don’t know.
The book states: “In the bioweapons field, unless future technologies can render biomaterials behavior predictable and controllable… the role of expertise and its socio-organizational context will remain critically important barriers to bioweapons development.”
Which seems like the crux – I agree with that statement, but predictable and controllable biomaterials are exactly what synthetic biology is trying to achieve, and we need to pay a lot of attention to how these factors will change in the future. Biosafety needs to be adaptable.
At least, biodefense in the future of cheap DNA synthesis will probably still have a little more going for it than ad campaigns like this.
[Cross-posted to the Global Risk Research Network.]