Griffin & Nesseth ”The science of Orphan Black: the official companion”

I didn’t know that the science fiction series Orphan Black actually had a real Cosima: Cosima Herter, the show’s science consultant. After reading this interview and finishing season 5, I realised that there was also a new book I needed to read: The science of Orphan Black: The official companion, by Casey Griffin, a PhD candidate in development, stem cells and regenerative medicine, and science communicator Nina Nesseth, with a foreword by Cosima Herter.

(Warning: This post contains serious spoilers for Orphan Black, and a conceptual spoiler for GATTACA.)

One thing about science fiction struck me when I was watching the last episodes of Orphan Black: Sometimes it makes a lot more sense if we don’t believe everything the fictional scientists tell us. Like real scientists, they may be wrong, or they may be exaggerating. The genetically segregated future of GATTACA becomes no less chilling when you realise that the silly high predictive accuracies claimed are likely just propaganda from an oppressive society. And as you realise that the dying P.T. Westmorland is an imposter, you can break your suspension of disbelief about LIN28A as a fountain of youth gene … Of course, genetics is a little more complicated than that, and he is just another rich dude who wants science to make him live forever.

However, it wouldn’t be Orphan Black if there weren’t a basis in reality: there are several single-gene mutations in model animals (e.g. Kenyon & al 1993) that can make them live a lot longer than normal, and LIN28A is involved in ageing (reviewed by Jun-Hao & al 2016). It’s not out of the question that an engineered single-gene disruption could substantially increase longevity in humans. Not practical, and not necessarily without unpleasant side effects, but not out of the question.

Orphan Black was part slightly scary adventure, part festival of ideas about science and society, part character-driven web of relationships, and part, sadly, bricolage of clichés. I found when watching season five that I’d forgotten most of the plots of seasons two through four, and I will probably never make the effort to sit through them again. The first and last seasons make up for it, though.

The series seems to have set out to squeeze in as many different biological concepts as possible, so the book has to try to do the same. It has not just clones and transgenes, but also gene therapy, stem cells, prion disease, telomeres, dopamine, ancient DNA, stem cells in cosmetics and so on. Two chapters try valiantly to make sense of the clone disease and the cure. It shows that the authors have encyclopedic knowledge of life science, with a special interest in development and stem cells.

But I think they slightly oversell how accurate the show is. Like when Cosima tells Scott to ”run a PCR on these samples, see if there are any genetic markers” and ”can you sequence for cytochrome c?”, and Scott replies ”the barcode gene? that’s the one we use for species differentiation” … That’s what screen science is like. The right words, but not always in the right order.

Cosima and Scott sciencing at university, before everything went pear-shaped.

One of the good things about Orphan Black was the scientist characters. There were a ton of them! The good ones, geniuses with sparse resources and self-experimentation, the evil ones, well funded and deeply unethical, and Delphine. This scene is an exception in that it plays up the cringe-inducing nerd angle. Cosima and Scott grew beyond this.

There are some scientific oddities. They must be impossible to avoid. For example, the section on epigenetics treats it as a completely new field, sort of missing the history of the subfield. DNA methylation research was going on already in the 1970s (Gitschier 2009). Genomic imprinting, arguably the only solid example of transgenerational epigenetic effects in humans, and X inactivation were both being discovered during the 70s and 80s (reviewed by Ferguson-Smith 2011). The book also makes a hash of genome sequencing, which is a shame but understandable. It would have taken a lot of effort to disentangle how sequencing worked when the fictional clone experiment started and how it developed into what we see in season five, when Cosima runs Nanopore sequencing.

The idea of human cloning is evocative. Orphan Black flipped it on its head by making the main clone characters strikingly different. It also cleverly acknowledged that human cloning is a somewhat dated 20th century idea, and that the cutting edge of life science has moved on. But I wish the book had been harder on the premise of the clone experiment:

By cloning the human genome and fostering a set of experimental subjects from birth, the scientists behind the project would gain many insights into the inner workings of the human body, from the relay of genetic code into observable traits (called phenotypes), to the viability of manipulated DNA as a potential therapeutic tool, to the effects of environmental factors on genetics. It’s a scientifically beautiful setup to learn myriad things about ourselves as humans, and the doctors at Dyad were quick to jump at that opportunity. (Chapter 1)

This is the very problem. Of course, sometimes ethically atrocious fictional science would, in principle, generate useful knowledge. But when fictional science is near useless, let’s not pretend that it would produce a lot of valuable knowledge. When it comes to genetics and complex traits like human health, small-sample studies of this kind (even if they used clones) would be utterly useless. Worse than useless: they would likely be biased and misleading.

Researchers still float the idea of a ”baseline”, though in the form of a cell line, where it makes more sense. See the (Human) Genome Project-write (Boeke & al 2016), suggesting the construction of an ideal baseline cell line for understanding human genome function:

Additional pilot projects being considered include … developing a homozygous reference genome bearing the most common pan-human allele (or allele ancestral to a given human population) at each position to develop cells powered by ”baseline” human genomes. Comparison with this baseline will aid in dissecting complex phenotypes, such as disease susceptibility.

In the end, the most important part of science in science fiction isn’t to be factually correct, nor to be a coherent prediction about the future. If Orphan Black has raised interest in science, and I’m sure it has, that is great. And if it has stimulated discussions about the relationship between biological science, culture and ethics, that is even better.

The timeline of when relevant scientific discoveries happened in the real world and in Orphan Black is great. The book has a partial bibliography. The ”Clone Club Q&A” boxes range from silly fun to great open questions.

Orphan Black was probably the best genetics TV show around, and this book is a wonderful companion piece.

Plaque at the Roslin Institute to the sheep that haunts Orphan Black. ”Baa.”

Literature

Boeke, J. D., et al. (2016) The Genome Project-Write. Science.

Ferguson-Smith, A. C. (2011) Genomic imprinting: the emergence of an epigenetic paradigm. Nature Reviews Genetics.

Gitschier, J. (2009) On the track of DNA methylation: an interview with Adrian Bird. PLOS Genetics.

Jun-Hao, E. T., Gupta, R. R. & Shyh-Chang, N. (2016) Lin28 and let-7 in the metabolic physiology of aging. Trends in Endocrinology & Metabolism.

Kenyon, C., Chang, J., Gensch, E., Rudner, A. & Tabtiang, R. (1993) A C. elegans mutant that lives twice as long as wild type. Nature 366, 461–464.


European Society for Evolutionary Biology congress, Groningen, 2017

The European Society for Evolutionary Biology meeting this year took place August 20–25 in Groningen, Netherlands. As usual, the meeting was great, with lots of good talks and posters. I was also happy to meet colleagues, including people from Linköping who I’ve missed a lot since moving.

Here are some of my subjective highlights:

There were several interesting talks in the recombination symposium, spanning from theory to molecular biology and from within-population variation to phylogenetic distances. For example: Irene Tiemann-Boege talked about recombination hotspot evolution from the molecular perspective, with mutation bias and GC-biased gene conversion (Arbeithuber & al 2015), while Francisco Úbeda de Torres presented a population genetic model of recombination hotspots. I would need to pore over the paper to understand what was going on, whether the model solves the hotspot paradox (as the title said), and how it differs from his previous model (Úbeda & Wilkins 2011).

There were also talks about young sex chromosomes. Alison Wright talked about recombination suppression on the evolving guppy sex chromosomes (Wright & al 2017), and Bengt Hansson about the autosome–sex chromosome fusion in Sylvioidea birds (Pala & al 2012).

Piter Bijma gave two (!) talks on social genetic effects. That is when your trait value depends not just on your own genotype, but also on the genotypes of others around you, a situation that is probably not at all uncommon. After all, animals often live in groups, and plants have to stay put where they are. One can model this, which leads to a slightly whacky quantitative genetics where the heritable variance can be greater than the trait variance, and where individual and social effects can cancel each other out and prevent a response to selection.
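To make that variance claim concrete, here is a toy simulation of my own (made-up parameter values, not anything from Bijma’s talks): each individual’s phenotype is its own direct genetic effect plus the social effects of its group mates, while its total breeding value is its direct effect plus its social effect on all n − 1 mates.

```python
import numpy as np

rng = np.random.default_rng(1)

n_groups, group_size = 2000, 10
var_d, var_s, var_e = 1.0, 0.1, 1.0   # direct, social and residual variances

# Independent direct and social genetic effects, plus noise
a_d = rng.normal(0, np.sqrt(var_d), (n_groups, group_size))
a_s = rng.normal(0, np.sqrt(var_s), (n_groups, group_size))
e = rng.normal(0, np.sqrt(var_e), (n_groups, group_size))

# Phenotype = own direct effect + social effects of the n-1 group mates + noise
social_input = a_s.sum(axis=1, keepdims=True) - a_s
p = a_d + social_input + e

# Total breeding value = direct effect + social effect delivered to n-1 mates
tbv = a_d + (group_size - 1) * a_s

print(np.var(p), np.var(tbv))
```

Because the social effect is delivered to n − 1 group mates, its variance enters the total breeding values multiplied by (n − 1)², so with these made-up numbers the heritable variance clearly exceeds the trait variance.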

I first heard about this at ICQG in Edinburgh a few years ago (if memory serves, it was Bruce Walsh presenting Bijma’s slides?), but have only made a couple of fairly idle and unsuccessful attempts to understand it since. I got the feeling that social genetic effects should have some bearing on debates about kin selection versus multilevel selection, but I’m not sure how it all fits together. It is nice that it comes with a way to estimate effects (given that we know which individuals are in groups together and their relatedness), and there are some compelling case studies (Wade & al 2010). On the other hand, separating social genetic effects from other social effects must be tricky; for example, early social environment effects can look like indirect genetic effects (Canario, Lundeheim & Bijma 2017).

Philipp Gienapp talked about using realised relatedness (i.e. genomic relationships, a.k.a. throw all the markers into the model and let partial pooling sort them out) to estimate quantitative genetic parameters in the wild. There is a lot of relevant information in the animal breeding and human genetics literature, but applying these things in the wild comes with challenges that deserve some new research to sort out. Evolutionary genetics, like human genetics, is more interested in parameter estimation than in prediction of phenotypes or breeding values. On the other hand, human genetics methods often work on GWAS summary statistics; in this way, evolutionary genetics is probably more similar to breeding. Also, the relatedness structure of the populations may matter. Evolution happens in all kinds of populations, large and small, structured and well-mixed, so evolutionary geneticists may work with populations that are quite different from those in breeding and human genetics.
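For the record, ”throw all the markers into the model” usually starts from a genomic relationship matrix. Here is a minimal sketch (toy genotypes, sizes made up by me) of one common estimator, VanRaden’s first method: centre the genotypes by twice the allele frequency and scale by the total expected heterozygosity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: 20 individuals x 500 biallelic markers, coded 0/1/2
n, m = 20, 500
freq = rng.uniform(0.1, 0.9, m)                  # allele frequencies
geno = rng.binomial(2, freq, (n, m)).astype(float)

# VanRaden's first genomic relationship estimator:
# centre by 2p, scale by the sum of 2p(1-p) over markers
z = geno - 2 * freq
grm = z @ z.T / (2 * freq * (1 - freq)).sum()

# For unrelated individuals, diagonals hover around 1, off-diagonals around 0
print(np.diag(grm).mean())
```

The resulting matrix then takes the place of the pedigree-based relationship matrix in the usual mixed (”animal”) model, which is what makes realised relatedness usable in wild populations without deep pedigrees.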

For example, someone asked about estimating genetic correlations with genomic relationships. There are certainly animal breeding and human genetics papers about realised relatedness and genetic correlation (Jia & Jannink 2012, Visscher & al 2014, etc.), because of course breeders need to deal a lot with correlated traits, and human geneticists really like finding genetic correlations between different GWAS traits.

Speaking of population structure, Fst scans are still all the rage. There was a lot of discussion about trying to find regions of the genome that stand out as more differentiated in closely related populations (”genomic islands of speciation/divergence/differentiation”), and as less differentiated in mostly separated populations (introgression, possibly adaptive). But it’s not just Fst outliers. It’s encouraging to see different kinds of quantitative and population genomic methods applied in the same systems. On the hybrid and introgression side of things, Leslie Turner (Turner & Harr 2014) and Jun Kitano (Ravinet & al 2017) gave interesting talks on mice and sticklebacks, respectively. Danièle Filiault showed a super impressive integrative GWAS and selection mapping study of local adaptation in Swedish Arabidopsis thaliana (Kerdaffrec & al 2016).
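As an illustration of what a simple Fst scan does, here is a toy example of my own (made-up populations with five artificially divergent loci), using Hudson’s per-locus estimator, which Bhatia & al (2013) recommend for this kind of two-population comparison:

```python
import numpy as np

rng = np.random.default_rng(3)

n1 = n2 = 50                       # diploid individuals per population
m = 1000                           # loci
p_anc = rng.uniform(0.1, 0.9, m)   # ancestral allele frequencies
p_anc[:5] = 0.2                    # five loci we will make strongly divergent

# The two populations drift slightly apart; loci 0-4 diverge a lot
p1 = np.clip(p_anc + rng.normal(0, 0.02, m), 0.01, 0.99)
p2 = np.clip(p_anc + rng.normal(0, 0.02, m), 0.01, 0.99)
p2[:5] = p1[:5] + 0.5

# Sample allele counts and estimate frequencies in each population
f1 = rng.binomial(2 * n1, p1) / (2 * n1)
f2 = rng.binomial(2 * n2, p2) / (2 * n2)

# Hudson's per-locus Fst estimator
num = (f1 - f2) ** 2 - f1 * (1 - f1) / (2 * n1 - 1) - f2 * (1 - f2) / (2 * n2 - 1)
den = f1 * (1 - f2) + f2 * (1 - f1)
# Guard against 0/0 at loci that happen to be fixed in both samples
fst = np.divide(num, den, out=np.zeros_like(num), where=den > 0)

# The artificially divergent loci stand out in the scan
print(np.sort(np.argsort(fst)[-5:]))
```

The divergent loci come out with Fst well above the drift-only background, which is the logic behind outlier scans; the hard part in real data is deciding whether an outlier reflects selection rather than demography, low recombination, or plain noise.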

Susan Johnston spoke about recombination mapping in Soay sheep and Rum deer (Johnston & al 2016, 2017). Given how few large long-term genetic studies like this there are, it’s marvelous to see the same kind of analysis in two parallel systems. Jason Munshi-South gave what seemed like a fascinating talk about rodent evolution in New York City (Harris & Munshi-South 2017). Unfortunately, too many other people thought so too, and I mostly failed to eavesdrop from the corridor.

Finally, Nina Wedell gave a wonderful presidential address about evolution in the 21st century. ”Because I can. I’m the president now.” Yes!

The talk was about threats to evolutionary biology, examples of its usefulness, and a series of calls to action. I liked the part about celebrating science much more than the common call to explain science to people. You know, like you hear at seminars and the March for Science: we need to ”get out there” (where?) and ”explain what we’re doing” (to whom?). Because if it is true that science and scientists are being questioned, then scientists should speak in a way that works even if they’re not starting by default from a position of authority. Scientists need not just explain the science, but justify why the science is worth listening to in the first place.

”As your current president, I encourage you to celebrate evolution!”

I think this is precisely right, and it made me so happy. Of course, it leaves questions like ”What does that mean?”, ”How do we do it?”, but as a two word slogan, I think it is perfect.

Celebration aligns with sound rhetorical strategy in two ways. First, explanation is fine when someone asks for it, or is otherwise already disposed to listen to an explanation. But otherwise, it is more important to awaken interest and a positive state of mind before laying out the facts. (I can’t claim to be any kind of rhetoric expert. But see the Rhetorica ad Herennium, Book I, V–VII, for ancient wisdom on the topic.) By the way, I’m sure this is what people who are good at science communication actually do. Second, celebration means concentrating on the excitement and wonder, and the good things science can do. In that way, it prevents the trap of listing all the bad things that will happen if Trumpists, creationists and anti-vaccine activists get their way.

Nina Wedell also gave examples of the usefulness of evolution: biomimicry, directed evolution of enzymes, the power of evolutionary algorithms, plant and animal breeding, and prevention of resistance to herbicides and antibiotics. These are all good, worthy things, but also quite a limited subset of evolutionary biology. Maybe the idea is that evolutionary biology should be a basic science supporting applications like these. In line with that, she brought up how useful things can come serendipitously from studying strange, diverse organisms and figuring out how they do things. The example in the talk was the CRISPR–Cas system. Similar stories apply to other proteins used as biomedical and biotechnology tools, such as Taq polymerase and green fluorescent protein.

I have to question a remark about reproducibility, though. The list of threats included ”critique of the scientific method” and concerns over reproducibility, as if this was something that came from outside of science. I may have misunderstood. It was a very brief comment. But if problems with reproducibility are a threat to science, and I think they can be, then it’s not just a problem of image but a problem with how scientists perform, analyse, and report their science.

Evolutionary biology hasn’t been in the reproducibility crisis news the same way as psychology or behavioural genetics, but I don’t know if that is because of better quality, or just that no one has looked that carefully for the problems. There are certainly contradictory results here too, and the same overly flexible data analysis and selective reporting practices that cause problems elsewhere must be common in evolution too. I can think of some reasons why evolutionary biology may be better off. Parts of the field default to analysing data with multilevel or mixed models. Mixed models are not perfect, but they help with some multiple testing problems by fitting and partially pooling a lot of coefficients in the same model. Also, studies that use classical model organisms may be able to get a lot of replication, low variance, and large sample sizes in a way that is impossible for example with human experiments.

So I don’t know if there is a desperate need for large initiatives for replication of key results, preregistration of studies, and improvement of data analysis practice in evolution; there may or there may not. But wouldn’t it still be wonderful if we had them?

Bingo! I don’t have a ton of photos from Groningen, but here is my conference bingo card. Note what conspicuously isn’t filled in: the poster sessions took place in a nice big room, and were not that loud. In retrospect, I probably didn’t go to enough of the non-genetic inheritance talks, and I should’ve put Fisher 1930 instead of 1918.

Peerage of Science, first impressions

After I wrote a post about reviewing papers, Craig Primmer suggested on Twitter that I look into Peerage of Science. Peerage of Science is a portal and community for peer review. It has a lot of good ideas. It decouples reviewing from journal submission, but it is still made for papers that aim to be published in a conventional journal. It collects reviewers and manuscripts from different fields in one place, allows interested reviewers to select papers they want to review, and provides anonymity (if the authors want it). I once wrote a few sentences about what I thought ”optimal peer review” would be like, for a PLOS early career researchers’ travel grant. (I did not get the grant.) My ideas for better peer review were probably not that bright, or that realistic, but they did share several features with the Peerage of Science model. Naturally, I was interested.

I’ve tried reviewing for Peerage of Science for a couple of months. My first impression is that it seems to work really well. The benefits are quite obvious: I’ve seen some of the papers get more reviews than they would typically get at a journal, and the reviews usually seem no less elaborate. The structured form for reviewing is helpful, and corresponds well with what I think a good review should be like. I think I’ll stick around, look out for the notifications, and jump in when a paper is close to my interests. I really hope enough people will use Peerage of Science for it to be successful.

There are also downsides to this model:

There seems to be an uneven allocation of reviewer effort. Some papers have a lot of reviewers, but some have only one. Of course, only the people at Peerage of Science know the actual distribution of reviews. Maybe one-reviewer processes are actually very rare! This is a bit like post-publication review, except that there, you can at least know who else has already commented on a paper. I know some people think that this is a good thing: papers that attract interest also attract scrutiny, and thus reviewer effort is directed towards where it is most needed. But I think that in the ideal case, every paper would be reviewed thoroughly. This could be helped by an indicator of how many other reviewers have engaged, or at least already posted their essays.

There is also the frustration of coming late to a process where one feels the reviewers have done a poor job. This was my first experience. I joined a review process that was at its last stages, and found a short, rather sloppy review that missed most of what I thought were the important points, and belaboured what I thought was a non-issue. Too late did I realize that I could do nothing about it.

Who reviews the reviewers? The reviewers do. I see the appeal of scoring and weighting reviews. It certainly makes reviewing more of a learning experience, which must be a good thing. But I feel rather confused about what I am supposed to write as reviewer feedback. Evidently, I’m not alone, because people seem to put rather different things in the feedback box.

Since the Peerage of Science team have designed the whole format and platform, I assume that every part of the process is thought through. The feedback forms, the prompts that are shown with each step, the point at which different pieces of information are revealed to you — this is part of a vision of better peer review. But sometimes, that vision doesn’t fully make sense to me. For example, if the authors want to sign their manuscripts, Peerage of Science has the following ominous note for them:

Peerage of Science encourages Authors to remain anonymous during the review process, to ensure unbiased peer review and publishing decisions. Reviewers usually expect this too, and may perceive signed submissions as attempts to influence their evaluation and respond accordingly.

Also, I’d really really really love to be able to turn down the frequency of email notifications. In the last four days, I’ve gotten more than one email a day about review processes I’m involved in, even if I can’t do anything more until after the next deadline.

Reviewing, postscript

Later the same day as the post on reviewing was published, I saw the paper by Kovanis and coworkers on the burden of peer review in the biomedical literature. It’s silly of me that it didn’t occur to me to look for data on how many papers researchers review. Their first figure shows data on the number of reviews performed in 2015 by Publons users:


Figure 1B from Kovanis & al (2016) PLOS ONE (cc:by 4.0).

If we take these numbers at face value (but we probably shouldn’t, because Publons users seem likely to be a biased sample of researchers), my 4–6 reviews in a year fall somewhere in the middle: on the one hand, more than half of the researchers review fewer papers, but it’s a lot less than those who review the most.

This paper estimates the supply and demand of reviews in the biomedical literature. The conclusion is a lot like the above graph: reviewer effort is unevenly distributed. In their discussion, the authors write:

Besides, some researchers may be willing to contribute but are never invited. An automated method to improve the matching between submitted articles and the most appropriate candidate peer reviewers may be valuable to the scientific publication system. Such a system could track the number of reviews performed by each author to avoid overburdening them.

This seems right to me. There may be free riders who refuse to pull their weight. But there are probably a lot more people like me, who could and would review more if they were asked to. A way for editors to find them (us) more easily would probably be a good thing.