I was invited to write a guest post for the blog Political Science Replication. For those who have been reading my blog for a while there’s not much new there, but feel free to check it out.

# Github for science

Quick post for a useful link: here’s a guide to setting up your Github repository with a Digital Object Identifier (DOI) so that it’s permanently citable!

# Simulating data

Generating data from a probabilistic model with known parameters is a great way to test that any analysis you have written is working correctly, and to understand statistical methods more generally. In this post I’m going to show you an example of generating (simulating) data from a probabilistic model. A probabilistic model is one that is *stochastic* in that it doesn’t generate the same data set each time the simulation is run (unlike a deterministic model).

The following is actually the code that I used to generate the data in my post on importing and plotting data, so if you’ve been following this blog then you’re already familiar with the output of this post. This simulated data set consists of 5 subjects, each shown 20 trials at every combination of 7 contrasts and 5 spatial frequencies.

## The predictor variables

First we set up all the predictor variables (independent or experimental variables) we’re interested in.

### Variable vectors

(subjects <- paste0("S", 1:5)) # the double parentheses just serve to print the output

## [1] "S1" "S2" "S3" "S4" "S5"

As we’ve discussed, the `paste0` command is for concatenating (sticking together) strings. The command above creates a character vector of five strings, starting with "S1" and ending with "S5".

(contrasts <- exp(seq(-6, -1, l = 7)))

## [1] 0.002479 0.005704 0.013124 0.030197 0.069483 0.159880 0.367879

This creates a vector of length 7 containing log-spaced contrast values… and so on for other variables:

(sfs <- exp(seq(log(0.5), log(40), l = 5)))

## [1] 0.500 1.495 4.472 13.375 40.000

target_sides <- c("left", "right")

(n_trials <- 20)

## [1] 20

### Combining the variables

Now that we’ve set up the independent variables from the experiment, we want to create a data frame with one row for each combination of the variables for each subject. We can do this easily with R’s `expand.grid` command.

dat <- expand.grid(subject = subjects, contrast = contrasts, sf = sfs,
                   target_side = target_sides, trial = 1:n_trials)
head(dat)

## subject contrast sf target_side trial
## 1 S1 0.002479 0.5 left 1
## 2 S2 0.002479 0.5 left 1
## 3 S3 0.002479 0.5 left 1
## 4 S4 0.002479 0.5 left 1
## 5 S5 0.002479 0.5 left 1
## 6 S1 0.005704 0.5 left 1

You can see from the first few rows of `dat` what `expand.grid` has done: it has created a data frame (called `dat`) that contains each combination of the variables entered. A call to `summary` shows us how R has usefully made factors out of the string variables (subject and target side):

summary(dat)

## subject contrast sf target_side trial
## S1:1400 Min. :0.0025 Min. : 0.50 left :3500 Min. : 1.00
## S2:1400 1st Qu.:0.0057 1st Qu.: 1.50 right:3500 1st Qu.: 5.75
## S3:1400 Median :0.0302 Median : 4.47 Median :10.50
## S4:1400 Mean :0.0927 Mean :11.97 Mean :10.50
## S5:1400 3rd Qu.:0.1599 3rd Qu.:13.37 3rd Qu.:15.25
## Max. :0.3679 Max. :40.00 Max. :20.00

You can tell that something is a factor because instead of showing the summary statistics (mean, median) it just shows how many instances of each level there are.

To make the modelling a bit simpler, we’re also going to create a factor of spatial frequency:

dat$sf_factor <- factor(round(dat$sf, digits = 2))
summary(dat)

## subject contrast sf target_side trial
## S1:1400 Min. :0.0025 Min. : 0.50 left :3500 Min. : 1.00
## S2:1400 1st Qu.:0.0057 1st Qu.: 1.50 right:3500 1st Qu.: 5.75
## S3:1400 Median :0.0302 Median : 4.47 Median :10.50
## S4:1400 Mean :0.0927 Mean :11.97 Mean :10.50
## S5:1400 3rd Qu.:0.1599 3rd Qu.:13.37 3rd Qu.:15.25
## Max. :0.3679 Max. :40.00 Max. :20.00
## sf_factor
## 0.5 :1400
## 1.5 :1400
## 4.47 :1400
## 13.37:1400
## 40 :1400
##

## The data model

Now that we have the data frame for the experimental variables, we want to think about how to generate a lawful (but probabilistic) relationship between the experimental variables and the outcome variable (in this case, getting a trial correct or incorrect). I am going to do this in the framework of a logistic Generalised Linear Model (GLM).

For this data set, let’s say that each subject’s chance of getting the trial correct is a function of the contrast and the spatial frequency on the trial. I’m going to treat contrast as a continuous predictor (aka a *covariate*) and spatial frequency as a discrete variable (a *factor*). I could treat both as continuous, but visual sensitivity as a function of spatial frequency is non-monotonic, so I thought this blog post would be simpler if I just treat my five levels as discrete. In the simple GLM I’m going to use, this means that each subject will have six coefficients: an intercept, the slope of contrast, and an offset for each of the four non-reference levels of spatial frequency.

### Design matrix

R is pretty amazing for doing stuff in a GLM framework. For this example, we can generate the design matrix for our data frame `dat` using the `model.matrix` function:

X <- model.matrix(~log(contrast) + sf_factor, data = dat)
head(X)

## (Intercept) log(contrast) sf_factor1.5 sf_factor4.47 sf_factor13.37
## 1 1 -6.000 0 0 0
## 2 1 -6.000 0 0 0
## 3 1 -6.000 0 0 0
## 4 1 -6.000 0 0 0
## 5 1 -6.000 0 0 0
## 6 1 -5.167 0 0 0
## sf_factor40
## 1 0
## 2 0
## 3 0
## 4 0
## 5 0
## 6 0

The formula `~ log(contrast) + sf_factor` tells R that we want to predict something using log contrast (a covariate) and additive terms for the levels of the factor `sf_factor`. If we changed the `+` to a `*` this would give us all the interaction terms too (i.e. the slope would be allowed to vary with spatial frequency). Note how our first 6 rows have all zeros for the `sf_factor` columns: this is because the first rows in `dat` are all from sf 0.5, which here is the reference level.

R has automatically dummy coded `sf_factor`, dropping the reference level (this defaults to the first level of the factor). A trial with a spatial frequency of 0.5 will have zeros for all the `sf_factor` columns of the design matrix. A trial with a spatial frequency of 1.5 will have a one in the `sf_factor1.5` column and zeros in the remaining `sf_factor` columns.
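To make this dummy coding concrete, here is a minimal sketch using a hypothetical three-level factor (`f`, with levels "low", "mid", "high") rather than the post's data; the names are mine, not from the experiment:

```r
# A toy illustration of R's default dummy (treatment) coding.
# "low" is the reference level and so gets no column of its own.
f <- factor(c("low", "mid", "high"), levels = c("low", "mid", "high"))
X <- model.matrix(~f)

# Each non-reference level gets an indicator column; the reference
# level's rows are all zeros in those columns:
X[, c("fmid", "fhigh")]
```

The same pattern is what produces the `sf_factor1.5`, `sf_factor4.47`, etc. columns above.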

### Coefficients (parameters)

Next we need our "true" coefficients for each subject. I will generate these as samples from a normal distribution (this is where the first *stochastic* part of our data generation comes in). But first, I will set R's random number seed so that you can reproduce the results described here (i.e. this makes the result deterministic). If you want stochasticity again, just comment out this line:

set.seed(424242)

Now the coefficients:

b0 <- rnorm(length(subjects), mean = 7, sd = 0.2)    # intercept
b1 <- rnorm(length(subjects), mean = 2, sd = 0.2)    # slope of contrast
b2 <- rnorm(length(subjects), mean = 2, sd = 0.2)    # sf 1.5
b3 <- rnorm(length(subjects), mean = 1.5, sd = 0.2)  # sf 4.4
b4 <- rnorm(length(subjects), mean = 0, sd = 0.2)    # sf 13
b5 <- rnorm(length(subjects), mean = -2, sd = 0.2)   # sf 40

print(betas <- matrix(c(b0, b1, b2, b3, b4, b5), nrow = 5))

## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 7.088 2.445 2.101 1.553 -0.2687 -2.138
## [2,] 7.194 1.974 2.279 1.722 0.1459 -2.243
## [3,] 7.152 2.232 2.136 1.510 0.2344 -2.189
## [4,] 7.034 2.070 2.242 1.391 0.1914 -1.790
## [5,] 7.085 1.922 1.705 1.541 0.3116 -1.871

The above matrix has the subjects in the rows and the coefficients in the columns. That is, subject 2's "true" (i.e. generating) parameters are 7.194, 1.974, 2.279, etc.

## Generating predictions

To generate predictions in the linear space of the GLM, we take the product of the design matrix and the betas for each subject:

eta <- rep(NA, length = nrow(dat))

for (i in 1:length(subjects)) {
  # which subject?
  this_subj <- levels(dat$subject)[i]
  # which rows belong to this subject?
  subj_rows <- dat$subject == this_subj
  # create a design matrix, and pull out the betas for this subject:
  this_X <- model.matrix(~log(contrast) + sf_factor, data = dat[subj_rows, ])
  this_betas <- betas[i, ]
  # multiply, stick into eta:
  eta[subj_rows] <- this_X %*% this_betas
}

# stick eta into dat:
dat$eta <- eta

The multiplication of the matrix X and the (row vector) beta above is equivalent to, for each row, multiplying each number in the design matrix by the corresponding beta value, then summing all these products to produce one number per row. This number is the linear predictor (and if we were dealing with simple linear regression, we'd be done now).
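This equivalence is easy to check on toy numbers (these values are made up for illustration, not taken from the simulation above):

```r
# For each row x of the design matrix, X %*% b computes sum(x * b).
X <- matrix(c(1, -6, 0,
              1, -5, 1), nrow = 2, byrow = TRUE)  # two toy trials, three columns
b <- c(7, 2, 2)                                   # toy coefficients

eta_mat <- as.vector(X %*% b)                  # matrix multiplication version
eta_row <- apply(X, 1, function(x) sum(x * b)) # explicit row-by-row version

all.equal(eta_mat, eta_row)  # TRUE: the two give identical linear predictors
```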

For our application however, we want to turn the linear predictor `eta` into a probability that ranges from 0.5 (chance performance on the task) to 1. To do this we're going to use one of the custom link functions in the `psyphy` package for fitting psychophysical data in R.

library(psyphy)

links <- mafc.weib(2)  # creates a list of functions
# run the linear predictor through the inverse link function to get a probability:
dat$p <- links$linkinv(dat$eta)

Test that parameters are in a decent range by plotting:

library(ggplot2)

fig <- ggplot(dat, aes(x = contrast, y = p)) +
  geom_line() +
  facet_grid(sf_factor ~ subject) +
  coord_cartesian(ylim = c(0.45, 1.05)) +
  scale_x_log10() +
  scale_y_continuous(breaks = c(0.5, 0.75, 1)) +
  theme_minimal()
fig

Finally, we want to generate a binary outcome (success or failure, usually denoted 1 and 0) with probability `p` for each trial. We do this using R's `rbinom` function, which generates random samples from the binomial distribution (but remember, since we set the seed above, you can reproduce my numbers):

dat$y <- rbinom(nrow(dat), 1, prob = dat$p)
summary(dat)

## subject contrast sf target_side trial
## S1:1400 Min. :0.0025 Min. : 0.50 left :3500 Min. : 1.00
## S2:1400 1st Qu.:0.0057 1st Qu.: 1.50 right:3500 1st Qu.: 5.75
## S3:1400 Median :0.0302 Median : 4.47 Median :10.50
## S4:1400 Mean :0.0927 Mean :11.97 Mean :10.50
## S5:1400 3rd Qu.:0.1599 3rd Qu.:13.37 3rd Qu.:15.25
## Max. :0.3679 Max. :40.00 Max. :20.00
## sf_factor eta p y
## 0.5 :1400 Min. :-9.717 Min. :0.500 Min. :0.00
## 1.5 :1400 1st Qu.:-2.925 1st Qu.:0.526 1st Qu.:1.00
## 4.47 :1400 Median : 0.286 Median :0.868 Median :1.00
## 13.37:1400 Mean : 0.004 Mean :0.776 Mean :0.77
## 40 :1400 3rd Qu.: 3.240 3rd Qu.:1.000 3rd Qu.:1.00
## Max. : 7.500 Max. :1.000 Max. :1.00

This function generates a vector with as many elements as there are rows in `dat`, each element the outcome of a single binomial trial (i.e. a Bernoulli trial) with probability given by the corresponding entry of `dat$p`.

### Converting from "y" (correct / incorrect) to "response"

Finally, as a little extra flavour, I convert the binary correct / incorrect response above into a "left" or "right" response by the simulated subject on each trial. Note that you wouldn't normally do this, I'm just doing it for demo purposes for the data import post.

dat$response <- "left"
dat$response[dat$target_side == "right" & dat$y == 1] <- "right"  # correct on right
dat$response[dat$target_side == "left" & dat$y == 0] <- "right"   # wrong on left
dat$response <- factor(dat$response)

## Saving the data

Now I'm going to save a subset of the data here to a series of .csv files. These are the files we imported in my post on importing data. First, to make the data set look a little more realistic I'm going to shuffle the row order (as if I randomly interleaved trials in an experiment).

new_rows <- sample.int(nrow(dat))
dat <- dat[new_rows, ]

### Create a unique id for each trial

I'm going to add a unique identifier (UUID) to each trial. **This is a good thing to do in your experiment script**. It makes it easier to, for example, check that a data set stored in a different table (e.g. eye tracking or brain imaging data) syncs up with the correct psychophysical trial. You could also add an exact time stamp to each trial.

library(uuid)

ids <- rep(NA, length = nrow(dat))
for (i in 1:length(ids)) ids[i] <- UUIDgenerate()
dat$unique_id <- ids

### Writing to a file

To do this I'm going to use the `paste0` command, which you should already be familiar with from the data import post. This allows you to stick strings (text) together. On the line with `paste0`, the first part gives us the project working directory, then we go into the `data` directory, and finally we create the filename from the subject name.

for (i in 1:length(subjects)) {
  # which subject?
  this_subj <- levels(dat$subject)[i]
  # use the subset command to subset the full data frame and remove some variables:
  this_dat <- subset(dat, subject == this_subj, select = c(-trial:-y))
  output_file <- paste0(getwd(), "/data/data_", this_subj, ".csv")
  write.table(this_dat, file = output_file, row.names = FALSE, sep = ",")
}

## Summing up

That's one way to generate data from an underlying probability model. I've used it to generate some data to import and plot, but this is a really great thing to know how to do. It's useful for testing your analysis code (if you don't get back the parameters you put in, you know something is wrong). I also found I learned a lot about statistics more generally by simulating various tests and models.

This post concludes the more "practical" content I've planned so far. In the next few posts I'm planning to talk about some more general things. If you have requests for future "practical" posts, let me know in the comments below.

# Graphically exploring data using ggplot2

Your second step after importing should always be to *look at the data*. That means plotting lots of things, and getting a sense of how everything fits together. *Never* run a statistical test until you’ve looked at your data in as many ways as you can. Doing so can give you good intuitions about whether the comparisons you planned make sense to do, and whether any unexpected relationships are apparent in the data. The best tool for reproducible data exploration that I have used is Hadley Wickham’s `ggplot2` package.

## A brief introduction to the mindset of ggplot

The first thing to note about `ggplot2` is that it is better thought of as a data exploration tool than a plotting package (like base graphics in Matlab, R, or Python). In those systems, you typically create an x variable and a y variable, then plot them as something like a line, then maybe add a new line in a different colour. `ggplot2` tries to separate your data from how you display it by making the links between the data and the visual representations explicit, including any transformations. There’s a little introduction to this philosophy here.

For me, what this means in practice is that you need to start thinking in terms of *long format data frames* rather than separate x- and y vectors. A long format data frame is one where each value that we want on the y-axis in our plot is in a separate row (see wiki article here). The Cookbook for R has some good recipes for going between wide and long format data frames here. For example, imagine we have measured something (say, reaction time) in a within-subjects design where the same people performed the task under two conditions. For ggplot we want a data frame with columns like:

| subject | condition | rt  |
|---------|-----------|-----|
| s1      | A         | 373 |
| s1      | B         | 416 |
| s2      | A         | 360 |
| s2      | B         | 387 |

*not* like:

| subject | rt condition A | rt condition B |
|---------|----------------|----------------|
| s1      | 373            | 416            |
| s2      | 360            | 387            |

and *not* like (familiar to anyone plotting with Matlab or Matplotlib):

x = [s1, s2]
y_1 = [373, 360]
y_2 = [416, 387]

For the first few weeks of using ggplot2 I found this way of thinking about data took some getting used to, particularly when trying to do things as I’d done in Matlab. However, once you make the mental flip, the ggplot universe will open up to you.
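The conversion from wide to long format can itself be done in R. Here's a minimal sketch using base R's `reshape` on the reaction-time example above (the column names `rt_A` and `rt_B` are my own; `reshape2::melt` or `tidyr` offer friendlier interfaces):

```r
# Wide format: one row per subject, one column per condition.
wide <- data.frame(subject = c("s1", "s2"),
                   rt_A = c(373, 360),
                   rt_B = c(416, 387))

# Long format: one row per subject x condition combination.
long <- reshape(wide, direction = "long",
                varying = c("rt_A", "rt_B"), v.names = "rt",
                timevar = "condition", times = c("A", "B"),
                idvar = "subject")

long[order(long$subject, long$condition), ]
```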

## Contrast detection data example

Now we will look at the data from my data import post. This consists of data from a psychophysical experiment where five subjects detected sine wave gratings at different contrasts and spatial frequencies. You can download the data from my github repository here. For each trial, we have a binary response (grating left or right) which is either correct or incorrect. Each row in the data frame is a trial, which means that this is already in *long format*:

load(paste0(getwd(), "/out/contrast_data.RData"))
head(dat)

## subject contrast sf target_side response
## 1 S1 0.069483 0.500 right right
## 2 S1 0.013124 40.000 right left
## 3 S1 0.069483 4.472 left left
## 4 S1 0.069483 40.000 left right
## 5 S1 0.367879 13.375 left left
## 6 S1 0.002479 0.500 left right
## unique_id correct
## 1 544ee9ff-2569-4f38-b04e-7e4d0a0be4d2 1
## 2 b27fe910-e3ba-48fb-b168-5afb1f115d8f 0
## 3 72c9d6ce-0a90-4d4b-a199-03435c15291b 1
## 4 48b5bbb2-e6ee-4848-b77e-839ed5320c01 0
## 5 32a5cce4-3f8a-4e63-80c1-3fee3230d1bd 1
## 6 47ebce53-9d5a-48de-936b-25d5105a0784 0

### Baby steps

Building a plot in `ggplot2` starts with the `ggplot()` function:

library(ggplot2)

fig <- ggplot(data = dat, aes(x = contrast, y = correct))

This command creates `fig`, which is a ggplot object, in our workspace. We’ve specified the data frame to use (`dat`), and two "aesthetics" using the `aes()` function. Aesthetics are how ggplot assigns variables in our data frame to things we want to plot. In this example we have specified that we want to plot `contrast` on the x-axis and `correct` on the y-axis.

We can try plotting this just by typing `fig` into the command window:

fig

## Error: No layers in plot

but this returns an error because we haven’t specified how we want to display the data. We must add a `geom` to the `fig` object (note the iterative notation, where we overwrite the fig object with itself plus the new element):

fig <- fig + geom_point()
fig

Now we get a plot of the data, with each correct trial as a point at `1` and each incorrect trial as a point at `0`. But that’s not very informative, because there’s a lot of overplotting; we’re really interested in how often the subjects get the trials correct at each contrast level. That is, we want to know the proportion of correct responses.

To do that we could create a new data frame where we compute the mean of all `correct` values for each cell of our experiment (i.e. for each subject, at each level of contrast and spatial frequency). However, it’s also possible for `ggplot2` to do that for us as we plot, using the `stat_summary` command:

fig <- fig + stat_summary(fun.data = "mean_cl_boot", colour = "red")
fig

The `mean_cl_boot` function computes the means and bootstrapped 95% confidence intervals on the mean for all the y-values falling at each unique x-value. These are shown as the red points in the above plot. Type `?stat_summary` and look at the examples (or run `example(stat_summary)`) to get an idea of what you can do out-of-the-box with this command. It also allows you to define your own functions to summarise the y values for each value of x, so it’s incredibly flexible.
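If you do want the summary data frame itself rather than having `ggplot2` aggregate on the fly, base R's `aggregate` does the same job explicitly. A sketch with a toy data frame shaped like `dat` (the column names match the post; the values are made up):

```r
# One row per trial, with a binary `correct` column:
toy <- data.frame(subject  = rep(c("S1", "S2"), each = 4),
                  contrast = rep(c(0.01, 0.1), times = 4),
                  correct  = c(0, 1, 1, 1, 0, 0, 1, 1))

# Mean of `correct` (i.e. proportion correct) for each subject x contrast cell:
agg <- aggregate(correct ~ subject + contrast, data = toy, FUN = mean)
agg
```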

Since the contrast values in our experiment were sampled logarithmically, the values for all the small contrasts are all squished up to the left of the plot. Therefore, the last thing we might want to do with this basic plot is to log scale the x-axis:

fig <- fig + scale_x_log10()
fig

Now we can see that the mean proportion correct starts from 0.5 for low contrasts (i.e. 50% correct, or chance performance on the task) and gradually rises up to near 100% correct in an S-shaped fashion.

### Facets and smooths

The goal of this experiment was to see whether and how human visual sensitivity to contrast changes depending on the spatial scale of the information (loosely, whether the pattern is *coarse* or *fine*). While the basic data representation makes sense (i.e. looking at proportion correct), the plot above is not very useful because it averages over all the different subjects and over the experimental variable we’re most interested in (spatial frequency). Thus it doesn’t tell us anything about the goal of the experiment.

Here’s where `ggplot2` gets really cool. We can apply the same basic plot to each subset of the data we’re interested in, in one line of code, by using *faceting*. First, here’s the basic plot again, but in a more succinct form (note how I string together subfunctions using the `+` sign across multiple lines):

fig <- ggplot(data = dat, aes(x = contrast, y = correct)) +
  stat_summary(fun.data = "mean_cl_boot") +
  scale_x_log10() +
  scale_y_continuous(limits = c(0, 1))
fig

Now we want to do the same thing, but look at the data for each subject and spatial frequency separately. The `facet_grid` command allows us to lay out the data subsets on a grid. Since we want to compare how performance shifts as a function of contrast *within* each subject, it makes sense to arrange the facets with subjects in the columns and spatial frequencies in the rows. This is done by adding one element to the `fig` object above:

fig <- fig + facet_grid(sf ~ subject)  # specifies rows ~ columns of the facet_grid
fig

![unnamed-chunk-8: proportion correct by contrast, faceted by spatial frequency (rows) and subject (columns)](http://tomwallisblog.files.wordpress.com/2014/04/unnamed-chunk-8.png)

Now we get a replication of the basic x-y plot but for each subject and spatial frequency. The axes have been scaled identically by default so it's easy to see variation across the facets. If you follow down each column, you can see that performance as a function of contrast first improves and then gets worse again, relative to the first spatial frequency (0.5 cycles of the grating per degree of visual angle). To see this more clearly it will help to add trend lines, which again we can do in one line in ggplot2:

fig <- fig + stat_smooth(method = "glm", family = binomial())
fig

In this case I've used a Generalised Linear Model specifying that we have binomial data (this defaults to a logistic link function). The blue lines show the maximum likelihood model prediction and the grey shaded regions show the 95% confidence limits on these predictions. However, this model isn't taking account of the fact that we know performance will asymptote at 0.5 (because this is chance performance on the task), so the slopes look all wrong. A psychophysicist would now fit this model with a custom link function that asymptotes at 0.5. Such link functions are implemented in Ken Knoblauch's `psyphy` package for R.

We could implement that within `ggplot2` as well, but instead here I will use a Generalised Additive Model (GAM) to show more flexible fitting of splines. This could come in handy if you didn't have a good approximation for the functional form of your dataset (i.e. you have only vague expectations for what the relationship between x and y should be).

library(mgcv)

## Loading required package: nlme
## This is mgcv 1.7-28. For overview type 'help("mgcv-package")'.

fig <- fig + stat_smooth(method = "gam", family = binomial(),
                         formula = y ~ s(x, bs = "cs", k = 3), colour = "red")
fig

![unnamed-chunk-10: the same facet grid with GAM smooths added in red](http://tomwallisblog.files.wordpress.com/2014/04/unnamed-chunk-10.png)

The above code fits a cubic spline ("cs") with three knots ("k=3"). Essentially this is a really flexible way to consider nonlinear univariate relationships (a bit like polynomial terms in normal GLMs). You can see that these data are probably overfit (i.e. the model is capturing noise in the data and is unlikely to generalise well to new data), and some of the confidence regions are pretty crazy (because the model is so flexible and the data are not well sampled for the observer's sensitivity) but that it gives a reasonable impression for the purposes of exploration. If you scan down the columns for each subject, you can see that the point on the x-axis where the red curves have the steepest slope is furthest to the left for spatial frequencies of 1.5 and 4.5 cycles per degree. These observers reach a higher level of performance with less contrast than in the other conditions: their thresholds are lower. Humans are most sensitive to contrast in the 1–4 cycles per degree range (Campbell and Robson, 1968).

### Appearance matters

Finally, we can adjust the appearance of our plot. First, let's get rid of the ugly decimal-point labels in the spatial frequency dimension by creating a new variable, a factor of sf:

dat$sf_factor <- factor(dat$sf)
levels(dat$sf_factor) <- round(sort(unique(dat$sf)), digits = 1)  # rename levels

Second, some people don't like the grey background theme of the ggplot default. It took me a little while to get used to, but now I quite like it: by attending to things that are darker than the background you can concentrate on the data, but the gridlines are there if you need them. However, if you prefer a more traditional plot, just add the classic theme (`fig + theme_classic()`). Personally my favourite is now `theme_minimal()`. So having done all this, our entire plot call becomes:

fig <- ggplot(data = dat, aes(x = contrast, y = correct)) +
  facet_grid(sf_factor ~ subject) +
  stat_summary(fun.data = "mean_cl_boot") +
  stat_smooth(method = "gam", family = binomial(),
              formula = y ~ s(x, bs = "cs", k = 3)) +
  scale_x_log10(name = "Contrast") +
  scale_y_continuous(name = "Proportion Correct", limits = c(0, 1),
                     breaks = c(0, 0.5, 1)) +
  theme_minimal()
fig

## Going further with ggplot2

What would happen if we wanted to see whether performance changed depending on the side of the retina (left or right of fixation) the grating was presented? Perhaps sensitivity is different, or the person has a bias in responding to a side. We can look at this by simply adding a new argument in the `aes()` function when we call `ggplot`: `colour = target_side`. Try it at home! There's an example of doing this in the `plots.R` script on my github page. Here you can also see how the plots are saved to the `/figs` subdirectory, where a document file (like a `.tex` doc) can be set up to automatically pull in the figures. You can also see a nice vector graphic version of the final figure above.
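As a hedged sketch of what that colour mapping looks like in code: here I build a toy stand-in for `dat` (the real frame comes from `contrast_data.RData`; the toy values are made up) and map `target_side` to colour. This assumes a reasonably recent `ggplot2` (the `fun =` argument to `stat_summary`):

```r
library(ggplot2)

# Toy data shaped like the post's `dat`: trials at three contrasts on two sides.
set.seed(1)
toy <- data.frame(contrast    = rep(c(0.01, 0.05, 0.2), times = 40),
                  target_side = rep(c("left", "right"), each = 60),
                  correct     = rbinom(120, 1, 0.75))

# Mapping target_side to colour draws a separate mean for each side:
fig <- ggplot(toy, aes(x = contrast, y = correct, colour = target_side)) +
  stat_summary(fun = mean, geom = "point") +
  scale_x_log10()
fig
```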

This post was just a little taste of what you can do in ggplot2, with a focus on vision science data. There are many more thorough introductions to `ggplot2` available on the web, like here and here. I find that the Cookbook for R has heaps of useful little tips and code snippets, as well as showing you lots of basic ggplot things in its plotting section. If you want an example of some published figures made with `ggplot2`, as well as the code that generated them, you can see our recent paper here.

# Data import: follow-up

This is a quick update post following up my data import post. I have put a script file into the `/funs/` directory of my blog project that repeats the import and saving stuff I stepped through in that last post. You can find it on Github here. Feel free to fork that repository, but if you don’t want to deal with all the git and version control stuff you can just click the Download Zip button on the right to get all the files as a zip archive. The `data_import.R` script can be sourced via RStudio or an R command prompt, and will reproduce the `contrast_data.RData` file in the `/out/` directory. For this to work, your working directory needs to be set to the project’s root directory; the easiest way to do this is by setting up a Project in RStudio located in the root directory. When you open your R project in RStudio, the working directory will automatically be set to the root.

I also wanted to point you to this article on using the “good parts” of R. It’s certainly true that some of R’s base syntax and functions are kind of horrible; using those add-on packages is really helpful. I learned some new things there too – like the use of `data.table`.