
Exploring GLMs with Python

A few months ago Philipp Berens and I ran a six-week course introducing the Generalised Linear Model (GLM) to graduate students. We decided to run the course in Python and to use the IPython Notebook to present the lectures. You can view the lecture notes here and get the source code here.

The six lectures were:

  1. Introduction by us, then student group presentations of a Points of Significance article that we hoped would provide a refresher on basic statistics.
  2. Finishing off student presentations (which took too long in the first week), followed by an introduction to Python, dataframes and exploratory plotting.
  3. Design matrices
  4. Interactions
  5. ANOVA and link functions
  6. Current issues in statistical practice, in which we touched on things like exploratory vs confirmatory data analysis, p-hacking, and the idea that statistics is not simply a cookbook one can mechanically follow.

What worked

  • In general, the feedback suggests that most students felt they benefitted from the course. Many reported that they enjoyed using the notebooks and working through the course materials and exercises themselves.
  • The main goal of the course was to give students a broad understanding of how GLMs work and how many statistical procedures can be thought of as special cases of the GLM. Our teaching evaluations suggest that many people felt they achieved this.
  • The notebooks allow tying the theory and equations very concretely to the computations one performs to do the analysis. I think many students found this helpful, particularly in working through the design matrices.
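To give a flavour of that theory-to-computation connection, here is a minimal NumPy sketch (hypothetical data, not from the course notebooks) of the design-matrix view of a two-group comparison:

```python
import numpy as np

# Hypothetical two-group design: y = X @ beta + noise, where the design
# matrix X has an intercept column and a dummy-coded group column.
# This is the GLM view of a two-sample comparison.
group = np.array([0, 0, 0, 1, 1, 1])               # two groups, three observations each
X = np.column_stack([np.ones_like(group), group])  # intercept + dummy code

y = np.array([1.0, 1.2, 0.8, 2.1, 1.9, 2.0])       # made-up responses

# Ordinary least squares: solve for beta_hat minimising ||y - X beta||^2
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# beta_hat[0] is the group-0 mean; beta_hat[1] is the difference between
# the group means -- exactly the quantity a two-sample t-test estimates.
print(beta_hat)
```

Seeing the dummy coding laid out as an actual matrix, next to the equation, is the kind of thing the notebooks made concrete.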

What didn’t

  • Many students felt the course was too short for the material that we wanted to cover. I agree.
  • Some students found the lectures (for which weeks 2–5 involved me working through notebooks and presenting extra material) boring.
  • Despite the niceness of the Anaconda distribution, Python is unfortunately still not as easy to set up as (for example) MATLAB across different environments (primarily Windows, some OSX). We spent more time than we would have liked troubleshooting issues (mainly to do with different Windows versions not playing nicely with other things).
  • We didn’t spend a lot of time discussing the homework assignments in class.
  • Some students (graduate students in the neural behavioural school) are more familiar with MATLAB and strongly preferred that we teach the course using that.

If I were to run the course again

I think the main thing I would change if I ran the course again would be to have it run longer. As a “taste test” I think six weeks was fine, but to really get into the materials would require more lectures.

I also think it would be beneficial to split the content into lectures and practicals. I think the IPython notebooks in the way I used them would be excellent for teaching practical / small class tutorials, but that the lectures probably benefit from higher-level overviews and less scrolling over code*.

I would also plan the homework projects better. One student suggested that rather than having the introduction presentations at the beginning of the course, it would be nice to have the last lecture dedicated to students running through their analysis of a particular dataset in the notebook. I think that’s a great idea.

Finally, I would ignore requests to do the course in MATLAB. Part of why we wanted to use Python or R to do the course was to equip students with tools that they could continue using (for free) if they leave the university environment. Perhaps this is more valuable to undergraduates than PhD students (who already have some MATLAB experience), but I think it’s good for those students to get exposed to free alternatives as well. Plus, the IPython notebooks are just, well, pretty boss.

*I know you can hide cells using the slide mode, but I found slide mode in general quite clunky and so I opted not to use it.


High-level plotting in Python

If you have read my older blog posts, you know I’m a fan of R’s ggplot2 library for exploratory data analysis. It allows you to examine many views onto data, creating summaries over different variables right there as you plot. I gave a short tutorial here. Once you understand how the library works, it’s a very powerful way of quickly seeing patterns in large datasets.

Moving over to Python

The past few months have seen me ramp up my Python usage (as I anticipated almost a year ago). This is for a variety of reasons, but mostly (a) Python is a general programming language, so you can essentially do anything in it without it being a massive hack; (b) the scientific computing packages available for Python are generally super awesome; (c) it seems to me to be poised to replace MATLAB as the lingua franca of neuroscience / vision research — at least, much more so than R. See Tal Yarkoni’s blog for a related discussion. Of course, it’s also free and open source.

What I’m still kind of missing in Python is a nice high-level plotting library like ggplot2. Certainly, there are many in the works, but in playing around I’ve found that none of them can match R’s ggplot2 yet.

Amphibious assault

Of the libraries I’ve tried (main competitor is yhat’s ggplot port), my go-to library for plotting in Python is hands-down Seaborn. Developed largely by Michael Waskom, a graduate student in Cog Neuro at Stanford, it has the prettiest plots, and also the best ideas for high-level functionality.

Speeding up Seaborn?

Unfortunately, compared to R’s ggplot2, Seaborn’s data summary functions are slow. As an example, I’ll show you a summary plot using the simulated data from my R tutorials.

We want to summarise our (simulated) subjects’ detection performance as a function of stimulus contrast, for each spatial frequency. In R, my code looks like this:

    library(ggplot2)
    setwd('~/Dropbox/Notebooks/')
    dat <- read.csv('contrast_data.csv')
    summary(dat)

    plot_fun <- function(){
      fig <- ggplot(data = dat, aes(x=contrast, y=correct, colour=factor(sf))) +
        stat_summary(fun.data = "mean_cl_boot") + 
        scale_x_log10() + scale_y_continuous(limits = c(0,1)) + 
        facet_wrap( ~ subject, ncol=3)  +  
        stat_smooth(method = "glm", family=binomial())
      print(fig)
    }

    system.time(plot_fun())

This takes about 3 seconds (2.67) to return the following plot:

<a href="https://tomwallisblog.files.wordpress.com/2014/10/ggplot_timing.png"><img src="https://tomwallisblog.files.wordpress.com/2014/10/ggplot_timing.png" alt="ggplot_timing" width="600" height="485" class="aligncenter size-full wp-image-374" /></a>

EDIT: for some reason WordPress is being silly and won't let me actually show this picture. Suffice it to say it looks substantively similar to the plot below.

Let's contrast this to Seaborn. First the code:

    # do imports
    import numpy as np
    import pandas as pd
    import seaborn as sns
    import matplotlib.pyplot as plt

    dat = pd.read_csv('contrast_data.csv')

    # make a log contrast variable (because Seaborn can't auto-log an axis like ggplot):
    dat['log_contrast'] = np.log(dat['contrast'])

    # set some styles we like:
    sns.set_style("white")
    sns.set_style("ticks")
    pal = sns.cubehelix_palette(5, start=2, rot=0, dark=0.1, light=.8, reverse=True)

    %%timeit
    fig = sns.lmplot("log_contrast", "correct", dat,
                     x_estimator=np.mean,
                     col="subject",
                     hue='sf',
                     col_wrap=3,
                     logistic=True,
                     palette=pal,
                     ci=95);

(Note that the above lines were run in an IPython notebook, hence the %%timeit cell magic.)
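If you’re running this as a plain script rather than in the notebook, the standard library’s timeit module gives an equivalent measurement; the function below is just a stand-in for the plotting call:

```python
import timeit

def make_plot():
    # stand-in for the sns.lmplot(...) call above
    return sum(i * i for i in range(10000))

# Run the function three times, one execution per run, and take the best
# time -- similar in spirit to what %%timeit reports in the notebook.
elapsed = min(timeit.repeat(make_plot, number=1, repeat=3))
print("fastest run: {:.4f} s".format(elapsed))
```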

Seaborn takes… wait for it… 2 minutes and 11 seconds to produce the following plot:

[Figure: the Seaborn version of the summary plot]

(I’m using Python 3.3.5, Anaconda distribution 2.1.0, Seaborn 0.4.0.)

Note also that both packages are doing 1000 bootstraps by default (as far as I’m aware), so I’m pretty sure they’re doing equivalent things.
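To get a feel for what 1000 bootstrap resamples involve, here’s a minimal NumPy sketch (simulated binary responses, not the contrast dataset). Resampling and averaging is cheap; what gets expensive is a variant that refits a model, such as a logistic regression, on every one of those resamples:

```python
import numpy as np

rng = np.random.RandomState(0)
data = rng.binomial(1, 0.75, size=200)   # simulated binary "correct" responses

# Percentile bootstrap of the mean: resample with replacement 1000 times.
n_boot = 1000
idx = rng.randint(0, len(data), size=(n_boot, len(data)))
boot_means = data[idx].mean(axis=1)

ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
print(ci_low, ci_high)

# A model-based interval instead refits the model once per resample,
# so the total cost is roughly 1000 x (the time for a single fit).
```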

What can be done?

This is obviously not a positive factor for my switch from R to Python, and I’m hoping it’s just that I’ve done something wrong. However, another explanation is that whatever Seaborn is using to do the bootstrapping or logistic model fitting is just far less optimised than ggplot2’s backend in R.

The nice thing about open source software is that we can help to make this better. So if you’re a code guru who’s reading this and wants to contribute to the scientific computing world moving ever faster to Python, go fork the GitHub repo now!

Update

After I posted this, I opened an issue on GitHub asking the developers about the slow times. Turns out that the ci flag in sns.lmplot specifies confidence intervals for the logistic regression, which is also bootstrapped. Bootstrapping a logistic regression takes a while; setting ci=False means that Seaborn now takes about 7 seconds to produce that plot instead of 2 minutes.

So, hooray for Seaborn and for awesome open source developers!