Griffin & Nesseth ”The science of Orphan Black: the official companion”

I didn’t know that the science fiction series Orphan Black actually had a real Cosima: science consultant Cosima Herter. After reading this interview and finishing season 5, I realised that there was also a new book I needed to read: The science of Orphan Black: The official companion, by Casey Griffin (a PhD candidate in development, stem cells and regenerative medicine) and science communicator Nina Nesseth, with a foreword by Cosima Herter.

(Warning: This post contains serious spoilers for Orphan Black, and a conceptual spoiler for GATTACA.)

One thing about science fiction struck me when I was watching the last episodes of Orphan Black: sometimes it makes a lot more sense if we don’t believe everything the fictional scientists tell us. Like real scientists, they may be wrong, or they may be exaggerating. The genetically segregated future of GATTACA becomes no less chilling when you realise that the absurdly high predictive accuracies claimed are likely just propaganda from an oppressive society. And as you realise that the dying P.T. Westmorland is an imposter, you can break your suspension of disbelief about LIN28A as a fountain-of-youth gene … Of course, genetics is a little more complicated than that, and he is just another rich dude who wants science to make him live forever.

However, it wouldn’t be Orphan Black if there weren’t a basis in reality: there are several single-gene mutations in model animals (e.g. Kenyon & al 1993) that can make them live a lot longer than normal, and LIN28A is involved in ageing (reviewed by Jun-Hao & al 2016). An engineered single-gene disruption that substantially increases longevity in humans is not out of the question. Not practical, and not necessarily without unpleasant side effects, but not out of the question.

Orphan Black was part slightly scary adventure, part festival of ideas about science and society, part character-driven web of relationships, and part, sadly, bricolage of clichés. I found when watching season five that I’d forgotten most of the plots of seasons two through four, and I will probably never make the effort to sit through them again. The first and last seasons make up for it, though.

The series seems to have been set on squeezing as many different biological concepts as possible in there, so the book has to try to do the same. It has not just clones and transgenes, but also gene therapy, stem cells, prion disease, telomeres, dopamine, ancient DNA, stem cells in cosmetics and so on. Two chapters try valiantly to make sense of the clone disease and the cure. It shows that the authors have encyclopedic knowledge of life science, with a special interest in development and stem cells.

But I think they slightly oversell how accurate the show is. Like when Cosima tells Scott to ”run a PCR on these samples, see if there are any genetic markers” and ”can you sequence for cytochrome c?”, and Scott replies ”the barcode gene? that’s the one we use for species differentiation” … That’s what screen science is like. The right words, but not always in the right order.

Cosima and Scott sciencing at university, before everything went pear-shaped. One of the good things about Orphan Black was the scientist characters. There were a ton of them! The good ones, geniuses with sparse resources and self-experimentation; the evil ones, well funded and deeply unethical; and Delphine. This scene is an exception in that it plays up the cringe-inducing nerd angle. Cosima and Scott grew past this.

There are some scientific oddities. They must be impossible to avoid. For example, the section on epigenetics treats it as a completely new field, sort of missing the history of the subfield. DNA methylation research was going on already in the 1970s (Gitschier 2009). Genomic imprinting, arguably the only solid example of transgenerational epigenetic effects in humans, and X inactivation were both being discovered during the 70s and 80s (reviewed by Ferguson-Smith 2011). The book also makes a hash of genome sequencing, which is a shame but understandable. It would have taken a lot of effort to disentangle how sequencing worked when the fictional clone experiment started and how it got to how it works in season five, when Cosima runs Nanopore sequencing.

The idea of human cloning is evocative. Orphan Black flipped it on its head by making the main clone characters strikingly different. It also cleverly acknowledged that human cloning is a somewhat dated 20th century idea, and that the cutting edge of life science has moved on. But I wish the book had been harder on the premise of the clone experiment:

By cloning the human genome and fostering a set of experimental subjects from birth, the scientists behind the project would gain many insights into the inner workings of the human body, from the relay of genetic code into observable traits (called phenotypes), to the viability of manipulated DNA as a potential therapeutic tool, to the effects of environmental factors on genetics. It’s a scientifically beautiful setup to learn myriad things about ourselves as humans, and the doctors at Dyad were quick to jump at that opportunity. (Chapter 1)

This is the very problem. Of course, sometimes ethically atrocious fictional science would, in principle, generate useful knowledge. But when fictional science is near useless, let’s not pretend that it would produce a lot of valuable knowledge. When it comes to genetics and complex traits like human health, small-sample studies of this kind (even with clones) would be utterly useless. Worse than useless: they would likely be biased and misleading.

Researchers still float the idea of a ”baseline”, though, but in the form of a cell line, where it makes more sense. See the (Human) Genome Project-write proposal (Boeke & al 2016), which suggests constructing an ideal baseline cell line for understanding human genome function:

Additional pilot projects being considered include … developing a homozygous reference genome bearing the most common pan-human allele (or allele ancestral to a given human population) at each position to develop cells powered by ”baseline” human genomes. Comparison with this baseline will aid in dissecting complex phenotypes, such as disease susceptibility.

In the end, the most important part of science in science fiction isn’t to be factually correct, nor to be a coherent prediction about the future. If Orphan Black has raised interest in science, and I’m sure it has, that is great. And if it has stimulated discussions about the relationship between biological science, culture and ethics, that is even better.

The timeline of when relevant scientific discoveries happened in the real world and in Orphan Black is great. The book has a partial bibliography. The ”Clone Club Q&A” boxes range from silly fun to great open questions.

Orphan Black was probably the best genetics TV show around, and this book is a wonderful companion piece.

Plaque at the Roslin Institute to the sheep that haunts Orphan Black. ”Baa.”

Literature

Boeke, J. D., et al. (2016). The genome project-write. Science.

Ferguson-Smith, A. C. (2011). Genomic imprinting: the emergence of an epigenetic paradigm. Nature Reviews Genetics.

Gitschier, J. (2009). On the track of DNA methylation: An interview with Adrian Bird. PLOS Genetics.

Jun-Hao, E. T., Gupta, R. R., & Shyh-Chang, N. (2016). Lin28 and let-7 in the metabolic physiology of aging. Trends in Endocrinology & Metabolism.

Kenyon, C., Chang, J., Gensch, E., Rudner, A., & Tabtiang, R. (1993). A C. elegans mutant that lives twice as long as wild type. Nature, 366(6454), 461–464.


European Society for Evolutionary Biology congress, Groningen, 2017

The European Society for Evolutionary Biology meeting this year took place August 20–25 in Groningen, Netherlands. As usual, the meeting was great, with lots of good talks and posters. I was also happy to meet colleagues, including people from Linköping who I’ve missed a lot since moving.

Here are some of my subjective highlights:

There were several interesting talks in the recombination symposium, spanning from theory to molecular biology and from within-population variation to phylogenetic distances. For example: Irene Tiemann-Boege talked about recombination hotspot evolution from the molecular perspective, with mutation bias and GC-biased gene conversion (Arbeithuber & al 2015), while Francisco Úbeda de Torres presented a population genetic model of recombination hotspots. I would need to pore over the paper to understand what was going on, whether the model solves the hotspot paradox (as the title said), and how it differs from his previous model (Úbeda & Wilkins 2011).

There were also talks about young sex chromosomes. Alison Wright talked about recombination suppression on the evolving guppy sex chromosomes (Wright & al 2017), and Bengt Hansson about the autosome–sex chromosome fusion in Sylvioidea birds (Pala & al 2012).

Piter Bijma gave two (!) talks on social genetic effects. That is when your trait value depends not just on your own genotype, but on the genotypes of others around you, a situation that is probably not at all uncommon. After all, animals often live in groups, and plants have to stay put where they are. One can model this, which leads to a slightly wacky quantitative genetics where heritable variance can be greater than the trait variance, and where the individual and social effects can cancel each other out and prevent response to selection.
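The ”heritable variance greater than the trait variance” point can be seen with a back-of-the-envelope calculation. A minimal sketch in Python (the variance components are invented, and it assumes equal group sizes and independent direct and social effects): an individual’s total breeding value is its direct effect plus (n − 1) times its social effect, so its variance scales with (n − 1) squared, while the phenotypic variance only grows linearly with group size.

```python
# Sketch: with social genetic effects, the variance of total breeding values
# can exceed the phenotypic variance.
# Assumed model: phenotype z_i = a_i + sum of social effects s_j of the
# (n - 1) groupmates + e_i; total breeding value TBV_i = a_i + (n - 1) * s_i.
# Variance components below are made up for illustration.
Va, Vs, Ve = 1.0, 0.5, 1.0   # direct, social, and residual variance

for n in [2, 5, 10]:          # group size
    var_tbv = Va + (n - 1) ** 2 * Vs          # variance of total breeding values
    var_z = Va + (n - 1) * Vs + Ve            # phenotypic variance
    print(n, var_tbv, var_z, var_tbv > var_z)
```

With these numbers, already at group size five the heritable variance (9.0) is more than twice the phenotypic variance (4.0).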

I first heard about this at ICQG in Edinburgh a few years ago (if memory serves, it was Bruce Walsh presenting Bijma’s slides?), but have only made a couple of fairly idle and unsuccessful attempts to understand it since. I got the feeling that social genetic effects should have some bearing on debates about kin selection versus multilevel selection, but I’m not sure how it all fits together. It is nice that it comes with a way to estimate effects (given that we know which individuals are in groups together and their relatedness), and there are some compelling case studies (Wade & al 2010). On the other hand, separating social genetic effects from other social effects must be tricky; for example, early social environment effects can look like indirect genetic effects (Canario, Lundeheim & Bijma 2017).

Philipp Gienapp talked about using realised relatedness (i.e. genomic relationships, a.k.a. throw all the markers into the model and let partial pooling sort them out) to estimate quantitative genetic parameters in the wild. There is a lot of relevant information in the animal breeding and human genetics literature, but applying these things in the wild comes with challenges that deserve some new research to sort things out. Evolutionary genetics, similar to human genetics, is more interested in parameter estimation than in prediction of phenotypes or breeding values. On the other hand, human genetics methods often work on GWAS summary statistics; in this way, evolutionary genetics is probably more similar to breeding. Also, the relatedness structure of the populations may matter. Evolution happens in all kinds of populations, large and small, structured and well-mixed, so evolutionary geneticists may work with populations that are different from those in breeding and human genetics.

For example, someone asked about estimating genetic correlations with genomic relationships. There are certainly animal breeding and human genetics papers about realised relatedness and genetic correlation (Jia & Jannink 2012, Visscher & al 2014 etc), because of course, breeders need to deal a lot with correlated traits, and human geneticists really like finding genetic correlations between different GWAS traits.

Speaking of population structure, Fst scans are still all the rage. There was a lot of discussion about trying to find regions of the genome that stand out as more differentiated in closely related populations (”genomic islands of speciation/divergence/differentiation”), and as less differentiated in mostly separated populations (introgression, possibly adaptive). But it’s not just Fst outliers. It’s encouraging to see different kinds of quantitative and population genomic methods applied in the same systems. On the hybrid and introgression side of things, Leslie Turner (Turner & Harr 2014) and Jun Kitano (Ravinet & al 2017) gave interesting talks on mice and sticklebacks, respectively. Daniele Filiault showed a super impressive integrative GWAS and selection mapping study of local adaptation in Swedish Arabidopsis thaliana (Kerdaffrec & al 2016).
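For intuition about what those scans measure, the quantity is simple to compute per locus. A toy sketch in Python of a Nei-style Fst from the allele frequencies of two populations (real genome scans typically use windowed estimators such as Weir & Cockerham’s, so this is only for illustration):

```python
def fst(p1, p2):
    """Nei-style Fst for one biallelic locus, given the allele frequency
    in each of two populations (toy version for intuition only)."""
    # mean expected heterozygosity within populations
    hs = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2
    # expected heterozygosity of the pooled population
    pbar = (p1 + p2) / 2
    ht = 2 * pbar * (1 - pbar)
    return 0.0 if ht == 0 else (ht - hs) / ht

print(fst(0.5, 0.5))  # identical frequencies: 0.0
print(fst(0.0, 1.0))  # fixed difference: 1.0
```

Outlier scans then look for loci (or windows) where this value is unusually high or low compared to the genome-wide distribution.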

Susan Johnston spoke about recombination mapping in Soay sheep and Rum deer (Johnston & al 2016, 2017). Given how few large long-term genetic studies like this there are, it’s marvelous to see the same kind of analysis in two parallel systems. Jason Munshi-South gave what seemed like a fascinating talk about rodent evolution in New York City (Harris & Munshi-South 2017). Unfortunately, too many other people thought so too, and I mostly failed to eavesdrop from the corridor.

Finally, Nina Wedell gave a wonderful presidential address about evolution in the 21st century. ”Because I can. I’m the president now.” Yes!

The talk was about threats to evolutionary biology, examples of its usefulness, and a series of calls to action. I liked the part about celebrating science much more than the common call to explain science to people. You know, like you hear at seminars and the march for science: We need to ”get out there” (where?) and ”explain what we’re doing” (to whom?). Because if it is true that science and scientists are being questioned, then scientists should speak in a way that works even if they’re not starting by default from a position of authority. Scientists need not just explain the science, but justify why the science is worth listening to in the first place.

”As your current president, I encourage you to celebrate evolution!”

I think this is precisely right, and it made me so happy. Of course, it leaves questions like ”What does that mean?”, ”How do we do it?”, but as a two word slogan, I think it is perfect.

Celebration aligns with sound rhetorical strategy in two ways. First, explanation is fine when someone asks for it, or is otherwise already disposed to listen to an explanation. But otherwise, it is more important to awaken interest and a positive state of mind before laying out the facts. (I can’t claim to be any kind of rhetoric expert. But see the Rhetorica ad Herennium, Book I, V–VII, for ancient wisdom on the topic.) By the way, I’m sure this is what people who are good at science communication actually do. Second, celebration means concentrating on the excitement and wonder, and the good things science can do. In that way, it prevents the trap of listing all the bad things that will happen if Trumpists, creationists and anti-vaccine activists get their way.

Nina Wedell also gave examples of the usefulness of evolution: biomimicry, directed evolution of enzymes, the power of evolutionary algorithms, plant and animal breeding, and prevention of resistance to herbicides and antibiotics. These are all good, worthy things, but also quite a limited subset of evolutionary biology? Maybe the idea is that evolutionary biology should be a basic science supporting applications like these. In line with that, she brought up how serendipitously useful things can come from studying strange, diverse organisms and figuring out how they do things. The example in the talk was the CRISPR–Cas system. Similar stories apply to other proteins used as biomedical and biotechnology tools, such as Taq polymerase and green fluorescent protein.

I have to question a remark about reproducibility, though. The list of threats included ”critique of the scientific method” and concerns over reproducibility, as if this was something that came from outside of science. I may have misunderstood. It was a very brief comment. But if problems with reproducibility are a threat to science, and I think they can be, then it’s not just a problem of image but a problem with how scientists perform, analyse, and report their science.

Evolutionary biology hasn’t been in the reproducibility crisis news the same way as psychology or behavioural genetics, but I don’t know if that is because of better quality, or just that no one has looked that carefully for the problems. There are certainly contradictory results here too, and the same overly flexible data analysis and selective reporting practices that cause problems elsewhere must be common in evolution too. I can think of some reasons why evolutionary biology may be better off. Parts of the field default to analysing data with multilevel or mixed models. Mixed models are not perfect, but they help with some multiple testing problems by fitting and partially pooling a lot of coefficients in the same model. Also, studies that use classical model organisms may be able to get a lot of replication, low variance, and large sample sizes in a way that is impossible for example with human experiments.

So I don’t know if there is a desperate need for large initiatives for replication of key results, preregistration of studies, and improvement of data analysis practice in evolution; there may or there may not. But wouldn’t it still be wonderful if we had them?

Bingo! I don’t have a ton of photos from Groningen, but here is my conference bingo card. Note what conspicuously isn’t filled in: the poster sessions took place in a nice big room, and were not that loud. In retrospect, I probably didn’t go to enough of the non-genetic inheritance talks, and I should’ve put Fisher 1930 instead of 1918.

”These are all fairly obvious” (says Sewall Wright)

I was checking a quote from Sewall Wright, and it turned out that the whole passage was delightful. Here it is, from volume 1 of Genetics and the Evolution of Populations (pages 59-60):

There are a number of broad generalizations that follow from this netlike relationship between genome and complex characters. These are all fairly obvious but it may be well to state them explicitly.

1) The variations of most characters are affected by a great many loci (the multiple factor hypothesis).

2) In general, each gene replacement has effects on many characters (the principle of universal pleiotropy).

3) Each of the innumerable possible alleles at any locus has a unique array of differential effects on taking account of pleiotropy (uniqueness of alleles).

4) The dominance relation of two alleles is not an attribute of them but of the whole genome and of the environment. Dominance may differ for each pleiotropic effect and is in general easily modifiable (relativity of dominance).

5) The effects of multiple loci on a character in general involve much nonadditive interaction (universality of interaction effects).

6) Both ontogenetic and phylogenetic homology depend on calling into play similar chains of gene-controlled reactions under similar developmental conditions (homology).

7) The contributions of measurable characters to overall selective value usually involve interaction effects of the most extreme sort because of the usually intermediate position of the optimum grade, a situation that implies the existence of innumerable different selective peaks (multiple selective peaks).

What can we say about this?

It seems point one is true. People may argue about whether the variants behind complex traits are many, relatively common, with tiny individual effects or many, relatively rare, and with larger effects that average out to tiny effects when measured in the whole population. In any case, there are many causative variants, alright.

Point two — now also known as the omnigenic model — hinges on how you read ”in general”, I guess. In some sense, universal pleiotropy follows from genome crowding: if there are enough causative variants and a limited number of genes, eventually every gene will be associated with every trait.

I don’t think that point three is true. I would assume that many loss of function mutations to protein coding genes, for example, would be interchangeable.

I don’t really understand points six and seven, about homology and fitness landscapes, that well. The later section about homology reads to me as if it could be part of a debate going on at the time. Number seven describes Wright’s view of natural selection as a kind of fitness whack-a-mole, where if a genotype is fit in one dimension, it probably loses in some other. The hypothesis and the metaphor have been extremely influential — I think largely because many people thought that it was wrong in many different ways.

Points four and five are related and, I imagine, the most controversial of the list. Why does Wright say that there is universal epistasis? Because of physiological genetics. Or, in modern parlance, maybe because of gene networks and systems biology. On page 71, he puts it like this:

Interaction effects necessarily occur with respect to the ultimate products of chains of metabolic processes in which each step is controlled by a different locus. This carries with it the implication that interaction effects are universal in the more complex characters that trace such processes.

The argument seems to persist to this day, and I think it is true. On the other hand, there is the question of how much this matters to the variants that actually segregate in a given population and affect a given trait.

This is often framed as a question of variance. It turns out that even with epistatic gene action, in many cases, most of the genetic variance is still additive (Mäki-Tanila & Hill 2014, Huang & Mackay 2016). But something similar must apply to the effects that you will see from a locus: they also depend on the allele frequencies at other loci. An interaction does nothing when one of the interaction partners is fixed. If it is nearly fixed, the interaction will do nearly nothing. If they’re all at intermediate frequency, things become more interesting.
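A toy calculation makes both points concrete. Take a haploid two-locus model with a purely epistatic genotypic value g = a·b (this minimal example is mine, not from the book): at intermediate frequencies the least-squares additive variance still captures most of the genetic variance, and once one partner is fixed the interaction contributes nothing beyond a plain additive effect.

```python
from itertools import product

def variance_partition(p_a, p_b):
    """Additive and total genetic variance for a haploid two-locus model
    with purely epistatic genotypic value g = a * b (a, b are 0/1 alleles
    at frequencies p_a, p_b, loci independent)."""
    probs = {(a, b): (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b)
             for a, b in product([0, 1], repeat=2)}
    e = lambda f: sum(pr * f(a, b) for (a, b), pr in probs.items())
    mean_g = e(lambda a, b: a * b)
    v_g = e(lambda a, b: (a * b - mean_g) ** 2)   # total genetic variance
    # least-squares (average) effects: cov(g, allele) / var(allele)
    alpha_a = ((e(lambda a, b: a * a * b) - mean_g * p_a) / (p_a * (1 - p_a))
               if 0 < p_a < 1 else 0.0)
    alpha_b = ((e(lambda a, b: b * a * b) - mean_g * p_b) / (p_b * (1 - p_b))
               if 0 < p_b < 1 else 0.0)
    v_a = alpha_a ** 2 * p_a * (1 - p_a) + alpha_b ** 2 * p_b * (1 - p_b)
    return v_a, v_g

print(variance_partition(0.5, 0.5))  # V_A = 0.125 of V_G = 0.1875: two thirds additive
print(variance_partition(0.5, 1.0))  # partner fixed: all remaining variance is additive
```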

Wright’s principle of universal interaction is also grounded in his empirical work. A lot of space in this book is devoted to results from pigmentation genetics in guinea pigs, which includes lots of dominance and interaction. It could be that Wright was too quick to generalise from guinea pig coat colours to other traits. It could be that working in a system consisting of inbred lines draws your attention to nonlinearities that are rare and marginal in the source populations. On the other hand, it’s in these systems we can get a good handle on the dominance and interaction that may be missed elsewhere.

Study of effects in combination indicates a complicated network of interacting processes with numerous pleiotropic effects. There is no reason to suppose that a similar analysis of any character as complicated as melanin pigmentation would reveal a simpler genetic system. The inadequacy of any evolutionary theory that treats genes as if they had constant effects, favourable or unfavourable, irrespective of the rest of the genome, seems clear. (p. 88)

I’m not that well versed in pigmentation genetics, but I hope that someone is working on this. In an era where we can identify the molecular basis of classical genetic variants, I hope that someone keeps track of all these A, C, P, Q etc, and to what extent they’ve been mapped.

Literature

Wright, S. (1968). Genetics and the Evolution of Populations, Volume 1.

Mäki-Tanila, A. & Hill, W. G. (2014). Influence of gene interaction on complex trait variation with multilocus models. Genetics, 198(1), 355–367.

Huang, W. & Mackay, T. F. C. (2016). The genetic architecture of quantitative traits cannot be inferred from variance component analysis. PLOS Genetics, 12(11), e1006421.


Yours truly outside the library on Thomas Bayes’ road, incredibly happy with having found the book.

See you at #eseb2017

I’m going to Groningen for the European Society for Evolutionary Biology meeting on the 20th to 25th of August.

Given what I’m currently working on, I’m especially excited about the symposium on applications of evolutionary biology in agriculture and industry and also the sprawling three-day genomics of adaptation symposium, but I assume that there will be an abundance of interesting talks and posters all over the place.

If you are there, say hello!

Scripting for data analysis (with R)

Course materials (GitHub)

This was a PhD course given in the spring of 2017 at Linköping University. The course was organised by the graduate school Forum scientium and was aimed at people who might be interested in using R for data analysis. The materials developed from a part of a previous PhD course from a couple of years ago, an R tutorial given as part of the Behaviour genetics Masters course, and the Wright lab computation lunches.

Around twenty people attended the seminars, and a couple of handfuls of people completed the homeworks. I don’t know how much one should read into the course evaluation form, but the feedback was mostly positive. Some people had previous exposure to R, and did the first homework in an hour. Others had never programmed in any language, and had a hard time getting started.

There is certainly scope for improvement. For example, some of the packages used could be replaced with more contemporary tools. One could say that the course is slouching towards the tidyverse. But I worry a bit about making the participants feel too boxed in. I don’t want them to feel that they’re being taught a way that will solve some anticipated type of problem very neatly, but that may not generalise. Once I’ve fully made the switch to dplyr and tidyr (and maybe even purrr … though I hesitate) myself, I would probably use them in teaching too. Another nice plus would be to be able to use R for Data Science as course literature. The readings now are scattered; maybe a monolithic book would be good.

I’ve tried, in every iteration, to emphasize the importance of writing scripts, even when working interactively with R. I still think I need to emphasize it even more. There is also a kind of ”do as I say, not as I do” issue, since in the seminars, I demo some things by just typing them into the console. I’ll force myself to write them into a script instead.

Possible alternate flavours for the course include a longer version expanding on the same topics. I don’t think one should cram more content in. I’d like to have actual projects where the participants can analyse, visualise and present data and simulations.

This is the course plan we sent out:

1. A crash course in R

Why do data analysis with a scripting language
The RStudio interface
Using R as a calculator
Working interactively and writing code
Getting help
Reading and looking at data
Installing useful packages
A first graph with ggplot2

Homework for next time: The Unicorn Dataset, exercises in reading data, descriptive statistics, linear models and a few statistical graphs.

2. Programming for data analysis

Programming languages one may encounter in science
Common concepts and code examples
Data structures in R
Vectors
Data frames
Functions
Control flow

Homework for next time: The Unicorn Expression Dataset, exercises in data wrangling and more interesting graphs.

3. Working with moderately large data

Exercise followup
More about functions
Lists
Objects
Functional and imperative programming
Doing things many times, loops and plyr
Simulating data
Working on a cluster

Final homework: Design analysis by simulation: pick a data analysis project that you care about; simulate data based on a model and reasonable effect size; implement the data analysis; and apply it to simulated data with and without effects to estimate power and other design characteristics. This ties together skills from all seminars.
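To illustrate what such a design analysis might look like in miniature (the course itself used R; this is a Python sketch with a simple two-group z-test, a known standard deviation of 1, and an invented effect size): simulate data under the null and under the assumed effect, run the analysis on each replicate, and count how often the effect is detected.

```python
import math
import random

def simulate_power(effect, n=50, reps=2000, seed=1):
    """Estimate the power of a two-sample z-test (known sd = 1) by
    simulating `reps` experiments with `n` individuals per group."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]       # control group
        b = [rng.gauss(effect, 1.0) for _ in range(n)]    # treatment group
        z = (sum(b) / n - sum(a) / n) / math.sqrt(2 / n)  # z statistic
        if abs(z) > 1.96:                                 # two-sided, alpha = 0.05
            hits += 1
    return hits / reps

print(simulate_power(0.0))   # false positive rate, should sit near 0.05
print(simulate_power(0.5))   # power for the assumed effect size
```

The same skeleton generalises to any analysis: swap the data-generating step and the test for the model and estimator you actually plan to use.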

Summer of data science 1: Genomic prediction machines #SoDS17

Genetics is a data science, right?

One of my Summer of data science learning points was to play with out of the box prediction tools. So let’s try out a few genomic prediction methods. The code is on GitHub, and the simulated data are on Figshare.

Genomic selection is the happy melding of quantitative and molecular genetics. It means using genetic markers en masse to predict traits and make breeding decisions. It can give you better accuracy in choosing the right plants or animals to pair, and it can allow you to take shortcuts by DNA testing individuals instead of having to test them or their offspring for the trait. There are a bunch of statistical models that can be used for genomic prediction. Now, the choice of prediction algorithm is probably not the most important part of genomic selection, but bear with me.

First, we need some data. For this example, I used AlphaSim (Faux & al 2016), and the AlphaSim graphical user interface, to simulate a toy breeding population. We simulate 10 chromosomes of 100 cM each, with 100 additively acting causal variants and 2000 genetic markers per chromosome. The initial genotypes come from neutral simulations. We run one generation of random mating, then three generations of selection on trait values. Each generation has 1000 individuals, with 25 males and 500 females breeding.

So we’re talking a small-ish population with a lot of relatedness and reproductive skew on the male side. We will use the first two generations of selection (2000 individuals) to train, and try to predict the breeding values of the fourth generation (1000 individuals). Let’s use two of the typical mixed models used for genomic selection, and two tree methods.

We start by splitting the dataset and centring the genotypes by subtracting the mean of each column. Centring will not change predictions, but it may help with fitting the models (Strandén & Christensen 2011).
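In code, the split and the centring might look like the following (a Python/NumPy sketch with stand-in simulated genotypes; the actual analysis linked above is in R). One detail worth making explicit: compute the column means on the training set and reuse them for the test set, so both are on the same scale.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in genotype matrix: individuals x markers, coded 0/1/2.
X = rng.integers(0, 3, size=(3000, 200)).astype(float)

X_train, X_test = X[:2000], X[2000:]
col_means = X_train.mean(axis=0)     # means from the training set only
X_train_c = X_train - col_means      # centred training genotypes
X_test_c = X_test - col_means        # test set centred with the same means
```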

Let’s begin with the workhorse of genomic prediction: the linear mixed model where all marker coefficients are drawn from a normal distribution. This works out to be the same as GBLUP, the GCTA model, GREML, … a beloved child has many names. We can fit it with the R package BGLR. If we predict values for the held-out testing generation and compare with the real (simulated) values, it looks like this. The first panel shows a comparison with phenotypes, and the second with breeding values.

This gives us correlations of 0.49 between prediction and phenotype, and 0.77 between prediction and breeding value.
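As a stand-in for the BGLR fit (which is Bayesian and in R), the skeleton of the same idea can be written as plain ridge regression: shrink all marker coefficients with a single penalty λ, then predict the held-out individuals, using the closed form β̂ = (XᵀX + λI)⁻¹Xᵀy. A Python sketch with made-up simulated data and an arbitrary λ:

```python
import numpy as np

rng = np.random.default_rng(2)
n_train, n_test, m = 200, 100, 500
X = rng.integers(0, 3, size=(n_train + n_test, m)).astype(float)
X -= X.mean(axis=0)                               # centred genotypes
# sparse true effects: ~5% of markers are causal
beta_true = np.where(rng.random(m) < 0.05, rng.normal(0, 1, m), 0.0)
g = X @ beta_true                                 # true breeding values
y = g + rng.normal(0, g.std(), n_train + n_test)  # phenotypes, heritability ~ 0.5

X_tr, y_tr = X[:n_train], y[:n_train]
X_te, g_te = X[n_train:], g[n_train:]

lam = 100.0                                       # ridge penalty (arbitrary here)
beta_hat = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(m), X_tr.T @ y_tr)
pred = X_te @ beta_hat
print(np.corrcoef(pred, g_te)[0, 1])              # accuracy against breeding values
```

This frequentist shortcut has the same shrinkage structure as the normal-prior mixed model, minus the posterior uncertainty that the MCMC fit provides.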

This is a plot of the Markov chain Monte Carlo we use to sample from the model. If a chain behaves well, it is supposed to have converged on the target distribution, and there is supposed to be low autocorrelation. Here is a trace plot of four chains for the marker variance (with the coda package). We try to be responsible Bayesian citizens and run the analysis multiple times, and with four chains we get very similar results from each of them, and a potential scale reduction factor of 1.01 (it should be close to 1 when it works). But the autocorrelation is high, so the chains do not explore the posterior distribution very efficiently.
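The potential scale reduction factor itself takes only a few lines to compute. A sketch of the basic Gelman–Rubin statistic in Python (the simple version, without the chain-splitting that modern implementations use), assuming equally long chains:

```python
import math
import random

def rhat(chains):
    """Gelman-Rubin potential scale reduction factor for a list of
    equal-length chains (basic version, no chain splitting)."""
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    b = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)   # between-chain
    w = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m               # within-chain
    var_plus = (n - 1) / n * w + b / n
    return math.sqrt(var_plus / w)

# Well-mixed chains give values close to 1; chains stuck in different
# places inflate the between-chain variance and push the statistic up.
rng = random.Random(1)
good = [[rng.gauss(0, 1) for _ in range(1000)] for _ in range(4)]
bad = [[rng.gauss(shift, 1) for _ in range(1000)] for shift in [0, 0, 2, 2]]
print(rhat(good))  # close to 1
print(rhat(bad))   # well above 1
```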

BGLR can also fit a few of the ”Bayesian alphabet” variants of the mixed model. They put different priors on the distribution of marker coefficients that allow for large-effect variants. BayesB uses a mixture prior, where a lot of effects are assumed to be zero (Meuwissen, Hayes & Goddard 2001). The way we simulated the dataset is actually close to the BayesB model: a lot of variants have no effect. However, mixture models like BayesB are notoriously difficult to fit — and in this case, it clearly doesn’t work that well. The plots below show chains for two BayesB parameters, with potential scale reduction factors of 1.4 and 1.5. So, even if the model gives us the same accuracy as ridge regression (0.77), we can’t know if this reflects what BayesB could do.

On to the trees! Let’s try random forest and Bayesian additive regression trees (BART). Regression trees make models as bifurcating trees. Something like the regression variant of: ”If the animal has a beak, check if it has a venomous spur. If it does, say that it’s a platypus. If it doesn’t, check whether it quacks like a duck …” The random forest makes a lot of trees on random subsets of the data, and combines the inferences from them. BART makes a sum of trees. Both a random forest (randomForest package) and a BART model (fit with the bartMachine package) give lower accuracies on this dataset — 0.66 for random forest and 0.72 for BART. This is not so unexpected, because the strength of tree models seems to lie in capturing non-additive effects, and this dataset, by construction, has purely additive inheritance. Both BART and random forest have hyperparameters that one needs to set. I used package defaults for random forest and, for BART, values that worked well for Waldmann (2016), but one should probably choose them by cross validation.
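A sketch of the tree-based fits, with the same assumed placeholder objects (`geno`, `pheno`, `bv`, `in_test`) as before. Hyperparameters are left at package defaults here; cross validation would be the more careful choice.

```r
library(randomForest)
library(bartMachine)

## bartMachine wants a data frame of predictors
x_train <- as.data.frame(geno[!in_test, ])
x_test  <- as.data.frame(geno[in_test, ])

rf <- randomForest(x = x_train, y = pheno[!in_test])
rf_pred <- predict(rf, x_test)

bart <- bartMachine(X = x_train, y = pheno[!in_test])
bart_pred <- predict(bart, x_test)

## Accuracy against the true simulated breeding values
cor(rf_pred, bv[in_test])
cor(bart_pred, bv[in_test])
```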

Finally, we can use classical quantitative genetics to estimate breeding values from the pedigree and relatives’ trait values. Fitting the so-called animal model in two ways (pedigree package and MCMCglmm) gives accuracies of 0.59 and 0.60.
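The MCMCglmm version can be sketched like this, assuming a data frame `dat` with columns `animal` (ids matching the pedigree) and `pheno`, and a pedigree data frame `ped` with columns id, dam and sire; these names are placeholders for the simulated data.

```r
library(MCMCglmm)

fit_animal <- MCMCglmm(pheno ~ 1,
                       random = ~ animal,     ## additive genetic effect via the pedigree
                       pedigree = ped,
                       data = dat,
                       nitt = 20000, burnin = 5000, thin = 10,
                       pr = TRUE,             ## keep posterior samples of breeding values
                       verbose = FALSE)

## Additive genetic and residual variance components
summary(fit_animal$VCV)

## Posterior means of the breeding values sit among the random effect
## solutions in fit_animal$Sol
```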

So, in summary, we recover the common wisdom that the linear mixed model does the job well. It was more accurate than just pedigree, and a bit better than BART. Of course, the point of this post is not to make a fair comparison of methods. Also, the real magic of genomic selection, presumably, happens at every step along the way. How do you get to that neat individual-by-marker matrix in the first place, how do you deal with missing data and data from different sources, what and when do you measure, what do you do with the predictions … But you knew that already.

Journal club of one: ”An expanded view of complex traits: from polygenic to omnigenic”

”An expanded view of complex traits: from polygenic to omnigenic” by Boyle, Yang & Pritchard (2017) came out recently in Cell. It has been all over Twitter, and I’m sure it will influence a lot of people’s thinking — rightfully so. It is a good read, pulls in a lot of threads, and has a nice blend of data analysis and reasoning. It’s good. Go read it!

The paper argues that for a lot of quantitative traits — specifically human diseases and height — almost every gene will be associated with every trait. More than that, almost every gene will be causally involved in every trait, most in indirect ways.

It continues with the kind of analysis used in Pickrell (2014) and Finucane & al (2015), among many others, that breaks genome-wide associations down by genome annotation. How much variability can we attribute to variants in open chromatin regions? How much to genes annotated as ”protein binding”? And so on.

These analyses point towards gene regulation being important, but not that strongly towards particular annotation terms or pathways. The authors take this to mean that, while genetic mapping, including GWAS, finds causally involved genes, it will not necessarily find ”relevant” genes. That is, not necessarily genes that are the central regulators of the trait. That may be a problem if you want to use genetic mapping to find drug targets, pathways to engineer, or similar.

This observation must speak to anyone who has looked at a list of genes from some mapping effort and thought: ”well, that is mostly genes we know nothing about … and something related to cancer”.

They write:

In summary, for a variety of traits, the largest-effect variants are modestly enriched in specific genes or pathways that may play direct roles in disease. However, the SNPs that contribute the bulk of the heritability tend to be spread across the genome and are not near genes with disease-specific functions. The clearest pattern is that the association signal is broadly enriched in regions that are transcriptionally active or involved in transcriptional regulation in disease-relevant cell types but absent from regions that are transcriptionally inactive in those cell types. For typical traits, huge numbers of variants contribute to heritability, in striking consistency with Fisher’s century-old infinitesimal model.

In summary: it’s universal pleiotropy. I don’t think there is any reason to settle on ”cellular” networks exclusively. After all, cells in a multicellular organism share a common pool of energy and nutrients, and exchange all kinds of signalling molecules. This agrees with classical models and the thinking in evolutionary genetics (see Rockman & Paaby 2013). Or look at this expression QTL and gene network study in aspen (Mähler & al 2017): the genes with eQTL tend to be peripheral, not network hub genes.

It’s a bit like in behaviour genetics, where people are fond of making up these elaborate hypothetical causal stories: if eyesight is heritable, and children with bad eyesight get glasses, and the way you treat a child who wears glasses somehow reinforces certain behaviours, so that children who wear glasses grow up to score a bit better on certain tests — are the eyesight variants also ”intelligence variants”? This is supposed to be a reductio ad absurdum of the idea of calling anything an ”intelligence variant” … But I suspect that this is what genetic causation, when fully laid out, will sometimes look like. It can be messy. It can involve elements that we don’t think of as ”relevant” to the trait.

There are caveats, of course:

One reason that there is clearer enrichment in variant-level annotations such as open chromatin than in gene-level annotations may be that the resolution is higher. We don’t really know that much about how molecular variation translates to higher-level trait variation. And let’s not forget that for most GWAS hits, we don’t know the causative gene.

They suggest defining ”core genes” like this: ”conditional on the genotype and expression levels of all core genes, the genotypes and expression levels of peripheral genes no longer matter”. Core genes are genes that d-separate the peripheral genes from a trait. That makes sense. Some small number of genes may be necessary molecular intermediates for a trait. But as far as I can tell, it doesn’t follow that useful biological information only comes from studying core genes, nor does it follow that we can easily tell if we’ve hit a core or a peripheral gene.

Also, there are quantitative genetics applications of GWAS data that are agnostic of pathways and genes. If we want to use genetics for prediction, for precision medicine etc, we do not really need to know the functions of the causative genes. We need big cohorts, well defined trait measurements, good coverage of genetic variants, and a good idea of environmental risk factors to feed into prediction models.

It’s pretty entertaining to see the popular articles about this paper, and the juxtaposition of quotes like ”that all those big, expensive genome-wide association studies may wind up being little more than a waste of time” (Gizmodo) with researchers taking the opportunity to bring up their favourite hypotheses about missing heritability — even if it’s not the same people saying both things. Because if we want to study rare variants, or complex epistatic interactions, or epigenomics, or what have you, the studies will have to be just as big and expensive, probably even more so.

Just please don’t call it ”omnigenetics”.

Literature

Boyle, Evan A., Yang I. Li, and Jonathan K. Pritchard. ”An Expanded View of Complex Traits: From Polygenic to Omnigenic.” Cell 169.7 (2017): 1177-1186.