Using R: When using do in dplyr, don’t forget the dot

There will be a few posts about switching from plyr/reshape2 for data wrangling to the more contemporary dplyr/tidyr.

My most common use of plyr looked something like this: we take a data frame, split it by some column(s), and use an anonymous function to do something useful. The function takes a data frame and returns another data frame, both of which could very possibly have only one row. (If, in fact, it has to have only one row, I’d suggest an assert_that() call as the first line of the function.)

library(plyr)
results <- ddply(some_data, "key", function(x) {
  ## do something; return data.frame()
})

Or maybe, if I felt serious and thought the function would ever be used again, I’d write:

calculate <- function(x) {
  ## do something; return data.frame()
}
result <- ddply(some_data, "key", calculate)

Rinse and repeat over and over again. For me, discovering ddply was like discovering vectorization, but for data frames. Vectorization lets you think of operations on vectors, without having to think about their elements. ddply lets you think about operations on data frames, without having to think about rows and columns. It saves a lot of thinking.

The dplyr equivalent would be do(). It looks like this:

library(dplyr)
grouped <- group_by(some_data, key)
result <- do(grouped, calculate(.))

Or once again with magrittr:

library(magrittr)
some_data %>%
  group_by(key) %>%
  do(calculate(.)) -> result

(Yes, I used the assignment arrow from the left hand side to the right hand side. Roll your eyes all you want. I think it’s in keeping with the magrittr theme of reading from left to right.)

One important thing here, which got me at first: There has to be a dot! Just passing the function name, as one would have done with ddply, will not work:

grouped <- group_by(some_data, key)
## will not work: Error: Results are not data frames at positions ...
try(result <- do(grouped, calculate))

Don’t forget the dot!
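If you want a self-contained example to run, here is a toy version (mtcars and this particular calculate() function are mine, not from the code above):

library(dplyr)

## a per-group function: takes a data frame, returns a one-row data frame
calculate <- function(x) {
  data.frame(mean_wt = mean(x$wt), n = nrow(x))
}

mtcars %>%
  group_by(cyl) %>%
  do(calculate(.))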

Mutation, selection, and drift (with Shiny)

Imagine a gene that comes in two variants, where one of them is deleterious to the carrier. This is not so hard to imagine, and it is often the case. Most mutations don’t matter at all. Of those that matter, most are damaging.

Next, imagine that the mutation happens over and over again with some mutation rate. This is also not so hard. After all, given enough time, every possible DNA sequence should occur, as if by monkeys and typewriters. In this case, since we’re talking about the deleterious mutation rate, we don’t even need exactly the same DNA sequence to occur; rather, what is important is how often a class of mutations with the same consequences happen.

Let’s illustrate this with a Shiny app! I made this little thing that draws graphs like this:

This is supposed to show the trajectory of a deleterious genetic variant, with sliders to decide the population size, mutation rate, selection, dominance, and starting frequency. The lines are ten replicate populations, followed for 200 generations. The red line is the estimated equilibrium frequency — where the population would end up if it was infinitely large and not subject to random chance.

The app runs here: https://mrtnj.shinyapps.io/mutation/
And the code is here: https://github.com/mrtnj/shiny_mutation

(Note: I don’t know how well this will work if every blog reader clicks on that link. Maybe it all crashes or the bandwidth runs out or whatnot. If so, you can always download the code and run in RStudio.)

We assume diploid genetics, random mating, and mutation only in one direction (broken genes never restore themselves). As in typical population genetics texts, we call the working variant ”A” and the broken variant ”a”, and their frequencies p and q. The genotypes AA, Aa and aa will have frequencies p^2, 2pq and q^2 before selection.

Damaging variants tend to be recessive, that is, they hurt only when you have two of them. Imagine an enzyme that makes some crucial biochemical product, that you need some but not a lot of. If you have one working copy of the enzyme, you may be perfectly fine, but if you are left without any working copy, you will have a deficit. We can describe this by a dominance coefficient called h. If the dominance coefficient is one, the variant is completely dominant, so that it damages you even if you only have one copy. If the dominance coefficient is zero, the variant is completely recessive, and having one copy of it does not affect you at all.

The average reproductive success (”fitness”) of each genotype is described in terms of selection coefficients, which tell us how much selection there is against a genotype. Selection coefficients range from 0, which means that you’re winning, to 1, which means that you’ve been completely out-competed. For a recessive damaging variant, the AA homozygotes and Aa heterozygotes are unaffected, but the aa homozygotes suffer a selection coefficient of s.

In the general case, fitness values for each genotype are 1 for AA, 1 - hs for Aa and 1 - s for aa. We can think of this as the probability of contributing to the next generation.
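The app’s actual code is in the GitHub repository linked above; the following is just a minimal sketch of the same kind of simulation in plain R (my own simplification, with made-up function and parameter names):

simulate_q <- function(N, mu, s, h, q0, generations = 200) {
  q <- numeric(generations)
  q[1] <- q0
  for (gen in 2:generations) {
    p <- 1 - q[gen - 1]
    ## genotype fitnesses 1, 1 - hs and 1 - s give the frequency after selection
    w_bar <- p^2 + 2 * p * q[gen - 1] * (1 - h * s) + q[gen - 1]^2 * (1 - s)
    q_sel <- (p * q[gen - 1] * (1 - h * s) + q[gen - 1]^2 * (1 - s)) / w_bar
    ## one-way mutation from A to a
    q_mut <- q_sel + mu * (1 - q_sel)
    ## drift: binomial sampling of 2N gene copies in the next generation
    q[gen] <- rbinom(1, 2 * N, q_mut) / (2 * N)
  }
  q
}

## ten replicate populations, roughly in the spirit of the plots from the app
replicates <- replicate(10, simulate_q(N = 100, mu = 1e-3, s = 1, h = 0, q0 = 0.1))
matplot(replicates, type = "l", lty = 1,
        xlab = "generation", ylab = "frequency of a")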

What about the red line in the graphs? If natural selection keeps removing a mutation from the gene pool, and mutation keeps adding it back in again there may be some equilibrium frequency where they cancel out, and the frequency of the damaging variant is more or less constant. This is called mutation–selection balance.

Haldane (1937) came up with an expression for the equilibrium variant frequency:

q_{eq} = \frac {h s + \mu - \sqrt{ (hs - \mu)^2 + 4 s \mu } } {2 h s - 2 s}

I’ve changed his notation a bit to use h and s for dominance and selection coefficient. \mu is the mutation rate. It’s not easy to see what is going on here, but we can draw it in the graph, and see that it’s usually very small. In these small populations, where drift is a major player, the variants are often completely lost, or drift to higher frequency by chance.
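To get a feel for how small the equilibrium frequency is, we can type the formula into R (my own transcription, not code from the app) and compare it with the usual textbook approximations:

q_eq <- function(h, s, mu) {
  (h * s + mu - sqrt((h * s - mu)^2 + 4 * s * mu)) / (2 * h * s - 2 * s)
}

## a partly dominant variant: close to the classic approximation mu / (hs)
q_eq(h = 0.5, s = 0.5, mu = 1e-5)
1e-5 / (0.5 * 0.5)

## a completely recessive variant: close to sqrt(mu / s)
q_eq(h = 0, s = 0.5, mu = 1e-5)
sqrt(1e-5 / 0.5)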

(I don’t know if I can recommend learning by playing with an app, but I definitely learned things while making it. For instance that C++11 won’t work on shinyapps.io unless you send the compiler a flag, and that it’s important to remember that both variants in a diploid organism can mutate. So I guess what I’m saying is: don’t use my app, but make your own. Or something.)

Literature

Haldane, J. B. S. ”The effect of variation of fitness.” The American Naturalist 71.735 (1937): 337-349.

Using R: a function that adds multiple ggplot2 layers

Another interesting thing that an R course participant identified: sometimes one wants to write a function that returns multiple layers to be added to a ggplot2 plot. One might think that just adding the layers together and returning the result would work, but it doesn’t. I think it has to do with how + is evaluated. There are a few workarounds that achieve similar results and may save typing.

First, some data to play with: this is a built-in dataset of chickens growing:

library(ggplot2)

data(ChickWeight)
diet1 <- subset(ChickWeight, Diet == 1)
diet2 <- subset(ChickWeight, Diet == 2)

This is just an example that shows the phenomenon. The first two functions will work, but combining them won’t.

add_line <- function(df) {
  geom_line(aes(x = Time, y = weight, group = Chick), data = df)
}

add_points <- function(df) {
  geom_point(aes(x = Time, y = weight), data = df)
}

add_line_points <- function(df) {
  add_line(df) + add_points(df)
}

## works
(plot1 <- ggplot() + add_line(diet1) + add_points(diet1))

## won't work: non-numeric argument to binary operator
try((plot2 <- ggplot() + add_line_points(diet1)))

Update: In the comments, Eric Pedersen gave a neat solution: stick the layers in a list and add the list. Like so:

(plot2.5 <- ggplot() + list(add_line(diet1), add_points(diet1)))

Nice! I did not know that one.

Also, you can get the same result by putting the mappings and data in the ggplot() function. This only works if all layers are going to plot the same data, but that covers quite a few cases:

## bypasses the issue by putting mappings in ggplot()
(plot3 <- ggplot(aes(x = Time, y = weight, group = Chick), data = diet1) +
    geom_line() + geom_point())

One way is to write a function that takes the plot object as input, and returns a modified version of it. If we use the pipe operator %>%, found in the magrittr package, it even gets a ggplot2 like feel:

## bypasses the issue and gives a similar feel with pipes

library(magrittr)

add_line_points2 <- function(plot, df, ...) {
  plot +
    geom_line(aes(x = Time, y = weight, group = Chick), ..., data = df) +
    geom_point(aes(x = Time, y = weight), ..., data = df)
}

(plot4 <- ggplot() %>% add_line_points2(diet1) %>%
   add_line_points2(diet2, colour = "red"))

Finally, in many cases, one can stick all the data in a combined data frame, and avoid building up the plot from different data frames altogether.

## plot the whole dataset at once
(plot5 <- ggplot(aes(x = Time, y = weight, group = Chick, colour = Diet),
                 data = ChickWeight) +
   geom_line() + geom_point())

Okay, maybe that plot is a bit too busy to be good. But note how the difference between plotting a single diet and all diets at the same time is just one more mapping in aes(). No looping or custom functions required.

I hope that was of some use.

Using R: Don’t save your workspace

To everyone learning R: Don’t save your workspace.

When you exit an R session, you’re faced with the question of whether or not to save your workspace. You should almost never answer yes. Saving your workspace creates an image of your current variables and functions, and saves them to a file called ”.RData”. When you re-open R from that working directory, the workspace will be loaded, and all these things will be available to you again. But you don’t want that, so don’t save your workspace.

Loading a saved workspace turns your R script from a program, where everything happens logically according to the plan that is the code, to something akin to a cardboard box taken down from the attic, full of assorted pages and notebooks that may or may not be what they seem to be. You end up having to put an inordinate trust in your old self. I don’t know about your old selves, dear reader, but if they are anything like mine, don’t save your workspace.

What should one do instead? One should source the script often, ideally from freshly minted R sessions, to make sure to always be working with a script that runs and does what it’s supposed to. Storing a data frame in the workspace can seem comforting, but what happens the day I overwrite it by mistake? Don’t save your workspace.

Yes, I’m exaggerating. When using any modern computer system, we rely on saved information and saved state all the time. And yes, when a computation takes too much time to reproduce, one should write the result to a file and load it later. But I think that should be a deliberate choice, worthy of its own save() and load() calls, and certainly not something one does with simple stuff that can be reproduced in the blink of an eye. Put more trust in your script than in your memory, and don’t save your workspace.
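To show what I mean by a deliberate choice, something like this is fine (the lm() call is just a stand-in for a slow computation, and the file name is made up):

## save one expensive object explicitly, instead of the whole workspace
fit <- lm(weight ~ Time, data = ChickWeight)
save(fit, file = "fit.RData")

## in a later session, load only that object back
load("fit.RData")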

It seems dplyr is overtaking correlation heatmaps

(… on my blog, that is.)

For a long time, my correlation heatmap with ggplot2 was the most viewed post on this blog. It still leads the overall top list, but by far the most searched and visited post nowadays is this one about dplyr (followed by its sibling about plyr).

I fully support this, since data wrangling and reorganization logically comes before plotting (especially in the ggplot2 philosophy).

But it’s also kind of a shame, because it’s not a very good dplyr post, and the one about the correlation heatmap is not a very good ggplot2 post. Thankfully, there is a new edition of the ggplot2 book by Hadley Wickham, and a new book by him and Garrett Grolemund about data analysis with modern R packages. I’m looking forward to reading them.

Personally, I still haven’t made the switch from plyr and reshape2 to dplyr and tidyr. But here is the updated tidyverse-using version of how to quickly calculate summary statistics from a data frame:

library(tidyr)
library(dplyr)
library(magrittr)

data <- data.frame(sex = c(rep(1, 1000), rep(2, 1000)),
                   treatment = rep(c(1, 2), 1000),
                   response1 = rnorm(2000, 0, 1),
                   response2 = rnorm(2000, 0, 1))

gather(data, response1, response2, value = "value", key = "variable") %>%
  group_by(sex, treatment, variable) %>%
  summarise(mean = mean(value), sd = sd(value))

Row by row we:

1-3: Load the packages.

5-8: Simulate some nonsense data.

10: Transform the simulated dataset to long form. This means that the values of the two variables response1 and response2 get collected into one column, which will be called ”value”. The column ”variable”, named by the key argument, will indicate which variable each row belongs to. (gather is tidyr’s version of melt; there is a head() call after the output below if you want to look at the long-form data.)

11: Group the resulting dataframe by sex, treatment and variable. (This is like the second argument to d*ply.)

12: Calculate the summary statistics.

Source: local data frame [8 x 5]
Groups: sex, treatment [?]

    sex treatment  variable        mean        sd
  (dbl)     (dbl)     (chr)       (dbl)     (dbl)
1     1         1 response1 -0.02806896 1.0400225
2     1         1 response2 -0.01822188 1.0350210
3     1         2 response1  0.06307962 1.0222481
4     1         2 response2 -0.01388931 0.9407992
5     2         1 response1 -0.06748091 0.9843697
6     2         1 response2  0.01269587 1.0189592
7     2         2 response1 -0.01399262 0.9696955
8     2         2 response2  0.10413442 0.9417059
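If you want to look at the intermediate long-form data from step 10 before grouping and summarising, a quick head() does it (long_data is just my name for it):

long_data <- gather(data, response1, response2, value = "value", key = "variable")
head(long_data)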

Using R: tibbles and the t.test function

A participant in the R course I’m teaching showed me a case where a tbl_df (the new flavour of data frame provided by the tibble package; standard in new RStudio versions) interacts badly with the t.test function. I had not seen this happen before. The reason is this:

Interacting with legacy code
A handful of functions don’t work with tibbles because they expect df[, 1] to return a vector, not a data frame. If you encounter one of these functions, use as.data.frame() to turn a tibble back to a data frame (tibble announcement on RStudio blog)

Here is code that reproduces the situation (tibble version 1.2):

library(tibble)
data(chickwts)
chick_tibble <- as_tibble(chickwts)
casein <- subset(chickwts, feed == "casein")
sunflower <- subset(chick_tibble, feed == "sunflower")
t.test(sunflower$weight, casein$weight) ## this works
t.test(as.data.frame(sunflower[, 1]), as.data.frame(casein[, 1])) ## this works too
t.test(sunflower[, 1], casein[, 1]) ## this doesn't

Error: Unsupported use of matrix or array for column indexing

I did not know that. The solution, which they found themselves, is to use as.data.frame.

I can see why not dropping to a vector makes sense. I’m sure you’ve at some point expected a data frame and got an ”$ operator is invalid for atomic vectors” instead. But it’s an unfortunate fact that the number of weird little thingamajigs to remember keeps increasing as the language evolves. And it’s a bit annoying that the standard RStudio setup breaks code that uses an old stats function, even if it does so in a somewhat non-obvious way.
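For what it’s worth, the difference is easy to see directly (using chickwts and the chick_tibble from the code above):

str(chickwts[, 1])       ## a data frame drops to a plain numeric vector
str(chick_tibble[, 1])   ## a tibble stays a one-column tibble
str(chick_tibble[[1]])   ## double brackets (or $) give you the vector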

Balancing a centrifuge

I saw this cute little paper on arXiv about balancing a centrifuge: Peil & Hauryliuk (2010) A new spin on spinning your samples: balancing rotors in a non-trivial manner. Let us have a look at the maths of balancing a centrifuge.

The way I think most people (including myself) balance their samples is to put them opposite of each other, just like Peil & Hauryliuk write. However, there are many more balanced configurations, some of which look really weird. The authors generate three balanced configurations of increasing oddity, show them to researchers and ask whether they are balanced. About half, 30% and 15% of them, respectively, identified the three configurations as balanced. Here are the configurations:

(Figure: the three rotor configurations, drawn after their paper.)

Take a rotor in a usual bench top centrifuge. It’s a large, in itself balanced, piece of metal with holes to put microcentrifuge tubes in. We assume that all tubes have the same mass m and that the holes are equally spaced. The rotor will spin around its own axis, helping us separate samples and pellet precipitates etc. When the centrifuge is balanced, the centre of mass of the samples will be aligned with the axis of rotation. So, if we place a two-dimensional coordinate system on the axis of rotation like so,

(Figure: a coordinate system placed on the axis of rotation.)

the tubes are positioned on a circle around it:

x_i = r \cos {\theta_i}
y_i = r \sin {\theta_i}

The angle to each position in the rotor will be

\theta(i) = \dfrac{2\pi(i - 1)}{N}

where i is the position in question, starting at 1, and N the number of positions in the rotor. Let’s label each configuration by the numbers of the positions that are occupied. So we could talk about (1, 16)30 as the common balanced pair of tubes in a 30-position rotor. (Yeah, I know, counting from 1 is a lot more confusing than counting from zero. Let’s view it as a kind of practice for dealing with genomic coordinates.)

We express the position of each tube (treated as a point mass) as a vector. Since we put the origin on the axis of rotation, these vectors have to sum to zero for the centrifuge to be balanced.

\sum \limits_{i} {m\mathbf{r_i}} = \mathbf{0}

Since the masses are equal, they can be removed, as can the radius, which is constant, and we can consider the x and y coordinates separately.

\left(\begin{array}{c} \sum \limits_{i} {\cos {\theta(i)}} \\ \sum \limits_{i} {\sin {\theta(i)}} \end{array}\right) = \left(\begin{array}{c} 0 \\ 0 \end{array}\right)

For the (1, 16)30 configuration, the vectors are

\left(\begin{array}{c} \cos {\theta(1)} \\ \sin {\theta(1)} \end{array}\right) + \left(\begin{array}{c} \cos {\theta(16)} \\ \sin {\theta(16)} \end{array}\right) = \left(\begin{array}{c} \cos {0} \\ \sin {0} \end{array}\right) + \left(\begin{array}{c} \cos {\pi} \\ \sin {\pi} \end{array}\right) = \left(\begin{array}{c} 1 \\ 0 \end{array}\right) + \left(\begin{array}{c} -1 \\ 0 \end{array}\right)

So we haven’t been deluding ourselves. This configuration is balanced. That is about as much maths as I’m prepared to do in LaTeX in a WordPress blog editor. So let’s implement this in R code:

library(magrittr)
theta <- function(n, N) (n - 1) * 2 * pi / N
tube <- function(theta) c(cos(theta), sin(theta))

Now, we can look at Peil & Hauryliuk’s configurations, for instance the first, (1, 11, 14, 15, 21, 29, 30)30:

positions <- c(1, 11, 14, 15, 21, 29, 30)
tubes <- positions %>% lapply(theta, N = 30) %>% lapply(tube)
c(sum(unlist(lapply(tubes, function(x) x[1]))),
  sum(unlist(lapply(tubes, function(x) x[2]))))

The above code 1) defines the configuration; 2) turns positions into angles and then tube coordinates; and 3) sums the x and y coordinates separately. The result isn’t exactly zero (because of floating-point rounding), but it is close enough. Putting in their third configuration, (4, 8, 14, 15, 21, 27, 28)30, we again get almost zero. Even this strange-looking configuration seems to be balanced.
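If you want to play with more configurations, it is easy to wrap the check in a little helper (my own addition, not something from the paper or the original code):

## sum the x and y coordinates for a set of occupied positions
balance <- function(positions, N) {
  angles <- theta(positions, N)
  c(x = sum(cos(angles)), y = sum(sin(angles)))
}

balance(c(1, 16), 30)                        ## the everyday two-tube configuration
balance(c(1, 11, 14, 15, 21, 29, 30), 30)    ## their first configuration
balance(c(4, 8, 14, 15, 21, 27, 28), 30)     ## their third configuration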

I’m biased because I read the text first, but if someone asked me, I would have to think about the first two configurations, and there is no way I would allow a student to run with the third if I saw it in the lab. That conservative attitude, though not completely scientific, might not be the worst thing. Centrifuge accidents are serious business, and as the authors note:

Finally, non-symmetric arrangement (Fig. 1C) was recognized as balanced by 17% of researchers. Some of these were actually calculating moment of inertia, i.e. were coming to solution knowingly, the rest where basically guessing. The latter should be banished from laboratory practice, since these people are ready to make dangerous decisions without actual understanding of the case, which renders them extremely dangerous in the laboratory settings.

(Plotting code for the first figure is on Github.)