Balancing a centrifuge

I saw this cute little paper on arXiv about balancing a centrifuge: Peil & Hauryliuk (2010), A new spin on spinning your samples: balancing rotors in a non-trivial manner. Let us have a look at the maths of balancing a centrifuge.

The way I think most people (including myself) balance their samples is to put them opposite each other, just like Peil & Hauryliuk write. However, there are many more balanced configurations, some of which look really weird. The authors generate three balanced configurations of increasing oddity, show them to researchers and ask whether they are balanced. About half, 30% and 15% of the researchers, respectively, identified the three configurations as balanced. Here are the configurations:

[Figure: the three configurations, drawn after their paper.]

Take the rotor of a typical benchtop centrifuge. It is a large piece of metal, balanced in itself, with holes to put microcentrifuge tubes in. We assume that all tubes have the same mass m and that the holes are equally spaced. The rotor will spin around its own axis, helping us separate samples, pellet precipitates and so on. When the centrifuge is balanced, the centre of mass of the samples is aligned with the axis of rotation. So, if we place a two-dimensional coordinate system on the axis of rotation like so,

[Figure: coordinate system centred on the axis of rotation.]

the tubes are positioned on a circle around it:

x_i = r \cos {\theta_i}
y_i = r \sin {\theta_i}

The angle to each position in the rotor will be

\theta(i) = \dfrac{2\pi(i - 1)}{N}

where i is the position in question, starting at 1, and N the number of positions in the rotor. Let's label each configuration by the numbers of the positions that are occupied. So we could talk about (1, 16)_{30} as the common balanced pair of tubes in a 30-position rotor. (Yeah, I know, counting from 1 is a lot more confusing than counting from zero. Let's view it as a kind of practice for dealing with genomic coordinates.)

We express the position of each tube (treated as a point mass) as a vector. Since we put the origin on the axis of rotation, these vectors have to sum to zero for the centrifuge to be balanced.

\sum \limits_{i} {m\mathbf{r_i}} = \mathbf{0}

Since the masses are equal, they can be removed, as can the radius, which is constant, and we can consider the x and y coordinates separately.

\left(\begin{array}{c} \sum \limits_{i} {\cos {\theta(i)}} \\ \sum \limits_{i} {\sin {\theta(i)}} \end{array}\right) = \left(\begin{array}{c} 0 \\ 0 \end{array}\right)

For the (1, 16)_{30} configuration, the vectors are

\left(\begin{array}{c} \cos {\theta(1)} \\ \sin {\theta(1)} \end{array}\right) + \left(\begin{array}{c} \cos {\theta(16)} \\ \sin {\theta(16)} \end{array}\right) = \left(\begin{array}{c} \cos {0} \\ \sin {0} \end{array}\right) + \left(\begin{array}{c} \cos {\pi} \\ \sin {\pi} \end{array}\right) = \left(\begin{array}{c} 1 \\ 0 \end{array}\right) + \left(\begin{array}{c} -1 \\ 0 \end{array}\right)

So we haven't been deluding ourselves: this configuration is balanced. That is about as much maths as I'm prepared to do in LaTeX in a WordPress blog editor. So let's implement this in R code:

library(magrittr)

## Angle of position n in a rotor with N positions
theta <- function(n, N) (n - 1) * 2 * pi / N

## Coordinates of a tube at a given angle (unit radius)
tube <- function(theta) c(cos(theta), sin(theta))

Now we can look at Peil & Hauryliuk's configurations, for instance the first, (1, 11, 14, 15, 21, 29, 30)_{30}:

positions <- c(1, 11, 14, 15, 21, 29, 30)
tubes <- positions %>% lapply(theta, N = 30) %>% lapply(tube)
c(sum(unlist(lapply(tubes, function(x) x[1]))),
  sum(unlist(lapply(tubes, function(x) x[2]))))

The above code 1) defines the configuration; 2) turns positions into angles and then tube coordinates; and 3) sums the x and y coordinates separately. The result isn't exactly zero (because of floating-point error), but close enough. Putting in their third configuration, (4, 8, 14, 15, 21, 27, 28)_{30}, we again get almost zero. Even this strange-looking configuration seems to be balanced.
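
To check any configuration in one go, we can wrap the same calculation in a little helper. This is a sketch of my own (the function name and the tolerance are not from the paper), reusing theta from above:

## Check whether a set of occupied positions is balanced, up to a small
## numerical tolerance (hypothetical helper, not from the original post)
is_balanced <- function(positions, N, tol = 1e-12) {
  angles <- theta(positions, N)
  all(abs(c(sum(cos(angles)), sum(sin(angles)))) < tol)
}

is_balanced(c(1, 16), N = 30)                      ## TRUE: the usual pair
is_balanced(c(1, 11, 14, 15, 21, 29, 30), N = 30)  ## TRUE: their first configuration
is_balanced(c(1, 2), N = 30)                       ## FALSE: two adjacent tubes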

I’m biased because I read the text first, but if someone asked me, I would have to think about the first two configurations, and there is no way I would allow a student to run with the third if I saw it in the lab. That conservative attitude, though not completely scientific, might not be the worst thing. Centrifuge accidents are serious business, and as the authors note:

Finally, non-symmetric arrangement (Fig. 1C) was recognized as balanced by 17% of researchers. Some of these were actually calculating moment of inertia, i.e. were coming to solution knowingly, the rest where basically guessing. The latter should be banished from laboratory practice, since these people are ready to make dangerous decisions without actual understanding of the case, which renders them extremely dangerous in the laboratory settings.

(Plotting code for the first figure is on Github.)

On DNA Day: Genetics without DNA

Since it's DNA Day, I thought I'd write a little about what can be done without knowing anything about DNA, and why it (of course) gets even better with DNA.

Imagine going back to the time before genome projects, sequencing machines and cloning. Say, sometime at the beginning of the 20th century. What do we know about heredity? We know that traits are inherited. You don't have to be a genius to notice that. We know that animal breeding and plant breeding work. That is, if we pick out the individuals that have the traits we need and let them mate, the next generation will be even better. That is 18th-century knowledge at the very least, and probably much older than that. We have a theory of heredity, from Mendel and his famous peas. We know that heredity consists of factors that are shuffled every generation, but without being diluted. These are what we today call genetic variants.

Every individual has a set of genetic variants. They affect the individual's traits, and they are passed on to the next generation when the individual has offspring. In modern terms: all individuals have genes, and genes come in different genetic variants. Everyone has two variants of each gene, and one of them will be passed on to each offspring. We don't yet know what genes are made of (spoiler: it's DNA).

Some traits are simple, and appear to be governed by variants of a single gene. A hen can have white or coloured feathers, for example; that is governed by variants of a single gene. One variant gives the hen white feathers, and the other allows other colours to show. The trait divides chickens into two types: white chickens and chickens of other colours.

But most traits are not that simple. Take how much the hen weighs. Chickens come in all sizes: small, large and in between. It is often traits like these that matter most. How big will the hen get? How many eggs does she lay? How afraid is she of humans? And so on. What do we do if we want to understand such a quantitative trait?

We start from the fact that there are many genes affecting a quantitative trait. Each gene comes in several variants, and each individual carries two of them. We begin by assuming something completely unrealistic, namely that we know exactly which genetic variants exist and what effects they have. Then we can write down an individual's trait value as a sum, where each term depends on the genetic variants the individual carries, plus a random contribution from various environmental factors. From there, we can derive conclusions about means and variation within a population of individuals and, most importantly, formulas for how much close relatives resemble each other.

Then something useful falls out. The unrealistic things we assumed at the start, that we knew every genetic variant and what it does, turn out not to be needed. They disappear from the formulas. It is as if we had an equation with X on both sides of the equals sign: we can divide by X, and the unknown variable disappears. With just a pedigree and measurements of the individuals' traits, we can calculate a lot of useful genetic quantities, without having to know exactly which gene does what. For example, we can find out which individuals we would do best to breed from, or how big a role genetic variants play for a certain trait.
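
To make that concrete, here is a small R sketch of my own (not from the original post; all numbers are made-up assumptions) that simulates an additive trait and then recovers heritability from parent-offspring resemblance alone, without ever looking at the simulated variant effects:

## Simulate many loci with additive effects (all values are assumptions)
set.seed(1)
n_loci <- 100
n_pairs <- 2000
allele_freq <- runif(n_loci, 0.1, 0.9)
effects <- rnorm(n_loci, mean = 0, sd = 0.1)
env_sd <- 1

## Genotypes: count (0, 1, 2) of one allele at each locus
genotype <- function(n) sapply(allele_freq, function(f) rbinom(n, 2, f))

## Trait value: sum of variant effects plus environmental noise
phenotype <- function(geno) as.vector(geno %*% effects) +
  rnorm(nrow(geno), mean = 0, sd = env_sd)

parent1 <- genotype(n_pairs)
parent2 <- genotype(n_pairs)
## Each offspring inherits one allele from each parent at each locus
offspring <- matrix(rbinom(length(parent1), 1, parent1 / 2) +
                    rbinom(length(parent2), 1, parent2 / 2),
                    nrow = n_pairs)

## The slope of offspring on mid-parent estimates heritability without
## knowing which variant does what ...
midparent <- (phenotype(parent1) + phenotype(parent2)) / 2
coef(lm(phenotype(offspring) ~ midparent))[2]

## ... and should roughly agree with the heritability computed from the
## simulated effects themselves
var_g <- var(as.vector(offspring %*% effects))
var_g / (var_g + env_sd^2)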

The theory of quantitative traits was developed in the early 20th century. It is a statistical theory that describes how heritable traits are passed on in pedigrees and populations. It is well aware that genes and genetic variants exist, but gets along fine without handling them directly.

Around the same time, entirely different researchers started working out what the hereditary material consists of. By 1953 we knew not only that DNA is the culprit, but also what the DNA molecule looks like. Then came molecular genetics, that is, genetics that works directly with DNA.

In recent years, quantitative genetics and molecular genetics have met in several ways. DNA testing has become so easy and cheap (relatively speaking) that many have started using it instead of pedigrees. Instead of laboriously keeping track of individuals' relationships, we can look at their genetic variants directly and estimate relatedness from DNA.

It has also become possible to find out which genetic variants affect traits, how much they affect them, and how they work. Then we can learn things that do not show up in pedigrees: how many genetic variants matter for a trait, whether the variants are common or rare, how large or small their effects are, and how they produce them. How can small differences in DNA make a hen larger or smaller, more or less afraid of humans, or make her lay more or fewer eggs? But that's a topic for another day.

(Today was apparently DNA Day, even though it is almost over. Old DNA Day posts: Gener, orsak och verkan (2015), På dna-dagen (2014).)

Teaching: Molecular genetics

NBIC45 is discontinued! Long live NBIC52! The latest version of the molecular genetics course has just started. I wasn't supposed to teach anything this year, but I'm stepping in as a substitute beard. So the teaching line-up changes a little less than originally planned.

Tube racks, tubes, solutions, pipettes, and some hepatica flowers that have nothing to do with the matter.

The lab sessions, where you can meet me, are about enjoyable things like genotyping with the polymerase chain reaction and transforming bacteria with plasmids. And interpreting not-always-perfectly-clear bands on gels, as well as queuing for the centrifuge. I think it's pretty fun. Queuing for the centrifuge may not be the most fun thing in the world, but anyone who has worked in a molecular lab can attest that it is, at least, realistic.

I have written (and tweeted) a bit about the content of the labs before.

Toying with models: The Game of Life with selection

Conway's Game of Life is probably the most famous cellular automaton: a grid of cells developing according to simple rules. Today, we're going to add mutation and selection to the game and let patterns evolve.

The fate of a cell depends on the number of cells living in the neighbouring positions. A cell with fewer than two neighbours dies from starvation. A cell with more than three neighbours dies from overpopulation. If a position is empty and has exactly three neighbours, it will be filled by a cell. These rules lead to some interesting patterns, such as still lifes that never change, oscillators that alternate between states, patterns that eventually die out but take a long time to do so, patterns that keep generating new cells, and so forth.

[Figures: oscillators and a still life.]

When I played with the Game of Life as a child, I liked one pattern called ”virus” that looked a bit like this. On its own, a grid of four-by-four blocks is a still life, but add one cell (the virus), and the whole pattern breaks. This is a version on a 30 x 30 cell board. It unfolds rather slowly, but in the end, a glider collides with a block, and you are left with some oscillators.

[Figures: the blocks pattern and the virus pattern.]

There are probably other interesting ways that evolution could be added to the Game of Life. We will take a hierarchical approach where the game is taken to describe development, and the unit of selection is the pattern. Each generation, we will create a variable population of patterns, allow them to develop and pick the fittest. So, here the term ”development” refers to what happens to a pattern when applying the rules of life, and the term ”evolution” refers to how the population of patterns changes over the generations. This differs slightly from Game of Life terminology, where ”evolution” and ”generation” usually refer to the development of a pattern, but it is consistent with how biologists use the words: development takes place during the life of an organism, and evolution happens over the generations as organisms reproduce and pass on their genes to offspring. I don't think there's any deep analogy here, but we can think of the initial state of the board as the heritable material that is being passed on and occasionally mutated. We let the pattern develop, and at some point, we apply selection.

First, we need an implementation of the Game of Life in R. We will represent the board as a matrix of ones (live cells) and zeroes (empty positions). Here is a function that develops the board one tick in time. After dealing with the corners and edges, it's very short, but also slow as molasses. The next function does this for a given number of ticks.

## Develop one tick. Return new board matrix.
develop <- function(board_matrix) {
  ## Pad the board with a border of empty cells so that edge and corner
  ## cells can be treated like any other cell
  padded <- rbind(matrix(0, nrow = 1, ncol = ncol(board_matrix) + 2),
                  cbind(matrix(0, ncol = 1, nrow = nrow(board_matrix)),
                        board_matrix,
                        matrix(0, ncol = 1, nrow = nrow(board_matrix))),
                  matrix(0, nrow = 1, ncol = ncol(board_matrix) + 2))
  new_board <- padded
  for (i in 2:(nrow(padded) - 1)) {
    for (j in 2:(ncol(padded) - 1)) {
      ## Count live neighbours, excluding the cell itself
      neighbours <- sum(padded[(i-1):(i+1), (j-1):(j+1)]) - padded[i, j]
      if (neighbours < 2 | neighbours > 3) {
        new_board[i, j] <- 0
      }
      if (neighbours == 3) {
        new_board[i, j] <- 1
      }
    }
  }
  ## Strip the padding before returning
  new_board[2:(nrow(padded) - 1), 2:(ncol(padded) - 1)]
}

## Develop a board a given number of ticks.
tick <- function(board_matrix, ticks) {
  if (ticks > 0) {
    for (i in 1:ticks) {
      board_matrix <- develop(board_matrix) 
    }
  }
  board_matrix
}
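
As a usage sketch, here is one way to set up boards like the ones in the figures. The exact placement of the blocks and of the virus cell is my guess; the original coordinates are not given in the post:

## A 30 x 30 board with nine 2 x 2 blocks (36 cells); the placement is an
## assumption, chosen so that the blocks don't interact
blocks <- matrix(0, nrow = 30, ncol = 30)
for (i in c(8, 15, 22)) {
  for (j in c(8, 15, 22)) {
    blocks[i:(i + 1), j:(j + 1)] <- 1
  }
}

## The "virus": the same board with one extra cell next to a block
virus <- blocks
virus[10, 9] <- 1

sum(blocks)                           ## 36 cells
identical(tick(blocks, 10), blocks)   ## TRUE: the blocks are a still life
sum(tick(virus, 80))                  ## cell count of the virus board at tick 80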

We introduce random mutations to the board. We will use a mutation rate of 0.0011 per cell, which gives a mean of about one mutation per 30 x 30 board (0.0011 × 900 ≈ 1).

## Mutation rate per cell, used by the selection functions below;
## 0.0011 gives about one mutation per 30 x 30 board
mu <- 0.0011

## Mutate a board by flipping randomly chosen cells
mutate <- function(board_matrix, mutation_rate) {
  mutated <- as.vector(board_matrix)
  outcomes <- rbinom(n = length(mutated), size = 1, prob = mutation_rate)
  for (i in 1:length(outcomes)) {
    if (outcomes[i] == 1)
      mutated[i] <- ifelse(mutated[i] == 0, 1, 0)
  }
  matrix(mutated, ncol = ncol(board_matrix), nrow = nrow(board_matrix))
}

I was interested in the virus pattern, so I decided to apply a simple directional selection scheme for number of cells at tick 80, which is a while after the virus pattern has stabilized itself into oscillators. We will count the number of cells at tick 80 and call that ”fitness”, even if it actually isn’t (it is a trait that affects fitness by virtue of the fact that we select on it). We will allow the top half of the population to produce two offspring each, thus keeping the population size constant at 100 individuals.

## Calculates the fitness of an individual at a given time
get_fitness <- function(board_matrix, time) {
  board_matrix %>% tick(time) %>% sum
}

## Develop a generation and calculate fitness
grow <- function(generation) {
  generation$fitness <- sapply(generation$board, get_fitness, time = 80)
  generation
}

## Select a generation based on fitness, and create the next generation,
## adding mutation.
next_generation <- function(generation) {
  keep <- order(generation$fitness, decreasing = TRUE)[1:50]
  new_generation <- list(board = vector(mode = "list", length = 100),
                         fitness = numeric(100))
  ix <- rep(keep, each = 2)
  for (i in 1:100) new_generation$board[[i]] <- generation$board[[ix[i]]]
  new_generation$board <- lapply(new_generation$board, mutate, mutation_rate = mu)
  new_generation
}

## Evolve a board, with mutation and selection, for a number of generations.
evolve <- function(board, n_gen = 10) { 
  generations <- vector(mode = "list", length = n_gen)

  generations[[1]] <- list(board = vector(mode = "list", length = 100),
                           fitness = numeric(100))
  for (i in 1:100) generations[[1]]$board[[i]] <- board
  generations[[1]]$board <- lapply(generations[[1]]$board, mutate, mutation_rate = mu)

  for (i in 1:(n_gen - 1)) {
    generations[[i]] <- grow(generations[[i]])
    generations[[i + 1]] <- next_generation(generations[[i]])
  }
  generations[[n_gen]] <- grow(generations[[n_gen]])
  generations
}
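
And a usage sketch (the seed is my own choice): run one replicate from the blocks board above, and pull out the mean fitness, that is the mean cell count at tick 80, in each generation.

## Run one replicate of ten generations of selection (slow with this
## implementation) and summarise mean fitness per generation
set.seed(2016)
blocks_generations <- evolve(blocks, n_gen = 10)
sapply(blocks_generations, function(generation) mean(generation$fitness))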

Let me now tell you that I was almost completely wrong about what happens with this pattern once you apply selection. I thought that the initial pattern of nine stable blocks (36 cells) was pretty good, that it would be preserved for a long time, and that virus-like patterns (like the first animation above) would mostly have degenerated by tick 80. As this plot of the evolution of the number of cells in one replicate shows, I grossly underestimated this pattern. The y-axis is the number of cells at time 80, and the x-axis shows individuals, with vertical lines separating generations. Already by generation five, most individuals do better than 36 cells in this case:

[Figure: number of cells at tick 80 per individual in one replicate, generations separated by vertical lines.]

As one example, here is the starting position and the state at time 80 for a couple of individuals from generation 10 of one of my replicates:

[Figures: starting boards and their states at tick 80 for two individuals from generation 10.]

Here is how the average cell number at time 80 evolves in five replicates. Clearly, things are still going on at generation 10, not only in the replicate shown above.

[Figure: mean cell number at tick 80 over generations, five replicates starting from the blocks pattern.]

Here is the same plot for the virus pattern I showed above, i.e. the blocks but with one single added cell, fixed in the starting population. Prior genetic architecture matters. Even if the virus pattern has fewer cells than the blocks pattern at time 80, it is apparently a better starting point to quickly evolve more cells:

[Figure: mean cell number at tick 80 over generations, five replicates starting from the virus pattern.]

And finally, out of curiosity, what happens if we start with an empty 30 x 30 board?

[Figure: mean cell number at tick 80 over generations, five replicates starting from an empty board.]

Not much. The block, a simple still life, evolves a lot. But in my replicate three, this creature emerged. ”Life, uh, finds a way.”

[Figure: the pattern that emerged de novo from an empty board in replicate three.]

Unfortunately, many of the selected patterns extended to the edges of the board, making them play not precisely the game of life, but the game of life with edge effects. I’d like to use a much bigger board and see how far patterns extend. It would also be fun to follow them longer. To do that, I would need to implement a more efficient way to update the board (this is very possible, but I was lazy). It would also be fun to select for something more complex, with multiple fitness components, potentially in conflict, e.g. favouring patterns that grow large at a later time while being as small as possible at an earlier time.

Code is on github, including functions to display and animate boards with the animation package and ImageMagick, and code for the plots. Again, the blocks_selection.R script is slow, so leave it running and go do something else.

Toying with models: The Luria–Delbrück fluctuation test

I hope that Genetics will continue running expository papers about their old classics, like this one by Philip Meneely about Luria & Delbrück (1943). Luria & Delbrück performed an experiment on bacteriophage resistance in Escherichia coli: growing bacterial cultures, exposing them to a phage, and then plating and counting the survivors, which have become resistant to the phage. They considered two hypotheses: either resistance occurs adaptively, in response to the phage, or it occurs by mutation some time during the growth of the culture but before the phages are added. They found the latter to be the case, and this is an example of how mutations happen irrespective of their effects on fitness, in a sense at random. Their analysis is based on a model of bacterial growth and mutation, and the aim of this exercise is to explore this model by simulating some data.

First, we assume that mutation happens with a fixed mutation rate \mu = 2 \cdot 10^{-8} , which is quite close to their estimated value, and that the mutation can’t reverse. We also assume that the bacteria grow by doubling each generation up to 30 generations. We start a culture from a single susceptible bacterium, and let it grow for a number of generations before the phage is added. (We’re going to use discrete generations, while Luria & Delbrück use a continuous function.) Then:

n_{susceptible,i+1}= 2 (n_{susceptible,i} - n_{mutants,i})

n_{resistant,i+1} = 2 (n_{resistant,i} + n_{mutants,i})

That is, every generation i, the mutants that occur move from the susceptible to the resistant category. The number of mutants that happen among the susceptible is binomially distributed:

n_{mutants,i} \sim Binomial(n_{susceptible,i}, \mu) .

This is an R function to simulate a culture:

culture <- function(generations, mu) {
  n_susceptible <- numeric(generations)
  n_resistant <- numeric(generations)
  n_mutants <- numeric(generations)
  n_susceptible[1] <- 1
  for (i in 1:(generations - 1)) {
    n_mutants[i] <- rbinom(n = 1, size = n_susceptible[i], prob = mu)
    n_susceptible[i + 1] <- 2 * (n_susceptible[i] - n_mutants[i])
    n_resistant[i + 1] <- 2 * (n_resistant[i] + n_mutants[i])
  }
  data.frame(generation = 1:generations,
             n_susceptible,
             n_resistant,
             n_mutants)
}
cultures <- replicate(1000, culture(30, 2e-8), simplify = FALSE)

We run a few replicate cultures and plot the number of resistant bacteria. This graph shows the point pretty well: because of random mutation and exponential growth, the cultures where mutations happen to arise relatively early will give rise to a lot more resistant bacteria than the ones where the first mutations are late. Therefore, there will be a lot of variation between the cultures because of their different histories.

[Figure: number of resistant bacteria over generations in the replicate cultures.]

library(ggplot2)

combined <- Reduce(function (x, y) rbind(x, y), cultures)
combined$culture <- rep(1:1000, each = 30)

resistant_plot <- qplot(x = generation, y = n_resistant, group = culture,
      data = combined, geom = "line", alpha = I(1/10), size = I(1)) + theme_bw()

We compare this to what happens under the alternative hypothesis, where resistance arises as a consequence of the introduction of the phage, with some resistance rate (this is not the same as the mutation rate above, even though we're using the same value). Then the number of resistant cells in a culture will be n_{acquired} \sim Binomial(2^{29}, \mu_{acquired}), where 2^{29} is the number of cells in a culture at generation 30.

resistant <- unlist(lapply(cultures, function(x) max(x$n_resistant)))

acquired_resistant <- rbinom(n = 1000, size = 2^29, 2e-8)

resistant_combined <- rbind(transform(data.frame(resistant = acquired_resistant), model = "acquired"),
                            transform(data.frame(resistant = resistant), model = "mutation"))

resistant_histograms <- qplot(x = resistant, data = resistant_combined, bins = 10) +
  facet_wrap(~ model, scales = "free_x")

[Figure: histograms of the number of resistant bacteria under the acquired-resistance and mutation models.]

Here are two histograms side by side to compare the cases. The important thing is the shape. If the acquired resistance hypothesis holds, the number of resistant bacteria in replicate cultures follows a Poisson distribution, because it counts rare, independent events over a large number of trials: a binomial distribution with many trials and a small success probability is approximately Poisson. The interesting thing about the Poisson distribution in this case is that its mean is equal to its variance. Under the mutation model, however (as we've already illustrated), there is a lot of variation between cultures. These fluctuations make the variance much larger than the mean, which is also what Luria and Delbrück found in their data. Therefore, the results are inconsistent with acquired resistance, and hence the experiment is called the Luria–Delbrück fluctuation test.

mean(resistant)
var(resistant)
mean(acquired_resistant)
var(acquired_resistant)
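
To put a number on the comparison (a small addition of my own), we can look at the variance-to-mean ratio directly; it should be close to one under acquired resistance and much larger than one under the mutation hypothesis.

## Variance-to-mean ratio: roughly 1 under acquired resistance,
## much greater than 1 under the mutation hypothesis
var(acquired_resistant) / mean(acquired_resistant)
var(resistant) / mean(resistant)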

Literature

Luria, S. E., & Delbrück, M. (1943). Mutations of bacteria from virus sensitivity to virus resistance. Genetics, 28(6), 491.

Meneely, P. M. (2016). Pick Your Poisson: An Educational Primer for Luria and Delbrück’s Classic Paper. Genetics, 202(2), 371-375.

Code on github.

Some things that happened last year (in pictures and links)

Some things happened in 2015. For one thing, I defended my PhD on the 18th of December. I’m still waiting for some paperwork so I can get a nice diploma, but dear reader, for most intents and purposes, I’m a doctor of genetics! How did that happen? Here is a short chronicle of the year in mobile phone pictures. Enjoy!

In May I went to the conference that is either called the IBANGS meeting or the GBB; I’m not sure which is the proper name, actually. Regardless, it is the meeting of the International Behavior and Neural Genetics Society, and last year it took place in Uppsala. One must take the opportunity when the meeting is in Sweden. I had a poster about some of our work on behavioural genomics under chicken domestication (Johnsson et al. 2016) and also featured on a couple of posters about dog behaviour.

I've mostly been to genetics- and evolution-centric conferences before, so the themes and perspectives were a bit new to me. There was quite a bit of neural circuitry and addiction studies, but also a big chunk of genomics-type research. I was surprised by the relative lack of quantitative genetics. I had expected more of that, particularly from the human side. Maybe psychiatric geneticists go to other conferences.

Summer and time to write my dissertation. In fact, since the dissertation was a compilation thesis under the Swedish system, it consisted of a bunch of papers stapled together with a summary (”kappa”, which I think comes from the Swedish word for overcoat, not from the Greek letter). Since the papers were either published or quite close to being submitted at this time, most of the writing actually lay behind me. Anyway, I decided to go to some place without distractions for a few days to write. I went to Östersund, where I don't know anyone, had nothing else to do, and wrote the draft while sitting in cafés and in the candle-lit (!) hotel dining room.

And in August: ESEB! The European Society for Evolutionary Biology meeting took place in Lausanne. That picture with all the greenery, by the way, it’s from the campus!

I gave a talk about our work with the Kauai chicken population (Gering et al. 2015, and unpublished stuff). I remember looking at the list of symposia early in the year and thinking that they looked less interesting than previous years … I don’t remember why, though, and it turns out that I was all wrong, and ended up overwhelmed with talks to go to. As usual, and as it should be. Also: my travel laptop died on me (of course!) and I took this picture with a trash can. Good times.

Speaking of silly selfies: here is me in the chicken house and in the lab. The hammer is the actual RNA isolation hammer; we've found that for a lot of frozen tissues, dipping the piece in liquid nitrogen, putting it in a bag and hitting it with a hammer is a good way to quickly disrupt it before putting it into a homogenizer. The pink tube contains Tri (acid phenol) extraction for RNA isolation. It's one of the most photogenic steps (one of the few where what's in the tube isn't just a colourless liquid).

Winter and time to print and defend the dissertation. Alexander Hultberg made the pretty chicken on the cover. Apparently I took two photos from my defence: one of the opponent, Dirk-Jan de Koning, explaining QTL mapping, and one of his summary slide.

I got a hat. The middle picture shows Dominic Wright giving a speech. I felt very touched.

Finally, Linköping University in winter, and a picture of my sister’s dog, because you know: dog.

I may eventually get back to regular posts about science, or not. In the meantime, here are links to new papers that came out in 2015 (or early 2016) that I had something to do with. Apparently, this was also a year when a lot of stuff went from pipeline into print.

Johnsson M, Williams MJ, Jensen P, Wright D. (2016) Genetical Genomics of Behavior: A novel chicken genomic model for anxiety behavior. Genetics.

Johnsson M, Jonsson KB, Andersson L, Jensen P, Wright D. (2016) Quantitative Trait Locus and Genetical Genomics Analysis Identifies Putatively Causal Genes for Fecundity and Brooding in the Chicken. G3.

Fallahsharoudi A, de Kock N, Johnsson M, Ubhayasekera SJ, Bergquist J, Wright D, Jensen P. (2015) Domestication Effects on Stress Induced Steroid Secretion and Adrenal Gene Expression in Chickens. Scientific reports.

Johnsson M, Jonsson KB, Andersson L, Jensen P, Wright D. (2015) Genetic Regulation of Bone Metabolism in the Chicken: Similarities and Differences to Mammalian Systems. PLoS Genetics.

Persson, M. E., Roth, L. S., Johnsson, M., Wright, D., & Jensen, P. (2015). Human‐directed social behaviour in dogs shows significant heritability. Genes, Brain and Behavior.

The paper on the genetics of open field behaviour made the rounds a bit in the media, which was fun. I’m a bit surprised to say that an article in The Daily Mail had the most interesting take … There was also a Nature News feature from collaborators’ field trip to Kauai in the fall, complete with a looping video of my supervisor Dom Wright and collaborator Eben Gering trying to catch a chicken. This is a must-see.

Richard Gray. Forget calling someone a chicken! Farm birds have BRAVERY genes that make them less anxious than their wild cousins. The Daily Mail.

Ewen Callaway. When chickens go wild. Nature News.

Last year in Marseille and the EBM18 book

The EBM in Marseille was about a year ago (September 2014), but I don’t mind a bit of blog anachronism. I post this from the European Society for Evolutionary Biology conference in Lausanne. If you happen to be here, you can see me talk about signatures of selection in feralisation in Symposium 20 on Tuesday afternoon.

If you saw a bearded man carrying a pink bag scrambling towards Gare de Marseille Saint-Charles (pictured below; an incredibly beautiful train station) while eating boiled potatoes from a plastic bag, you may have witnessed my stylish departure from Marseille. This was during the Air France strike, and I had just learned that I could catch a train to Nice and go from there. In a moment of brilliance or cheapness, I also decided to spend the night at the airport Nice Côte d’Azur.

[Photo: Gare de Marseille Saint-Charles.]

The conference itself was nothing short of wonderful. There were many interesting talks, but it was small enough that everything fit in one track, and there was plenty of time to meet people. The conference also ended with the nicest social activity at any conference I've been to: a group of participants went for a walk around Marseille with Pierre Pontarotti and his cute little dog. Myself, I presented our comb size work (2012 paper, 2014 paper and some new stuff). I felt like it went rather well. It seems someone else on the scientific committee thought so too, because I was invited to write a chapter for the book that they put together with meeting participants each year. The invitation was to write an overview of the field we talked about, so I wrote about ”The genomics of sexual ornamentation, gene identification and pleiotropy”. One can have a look at the chapter on Google Books. The chapter goes through genomic studies (mostly QTL mapping and gene expression microarrays) on sexual ornaments, and some of their problems and promises.

I am not really sure when the book came out; I saw it popping up on Google Scholar the other day, but I haven't seen the final version of my chapter. I assume a book is on its way or waiting for me when I get back from ESEB.

Pontarotti P (Ed.) (2015) Evolutionary Biology: Biodiversification from Genotype to Phenotype. Springer.