You probably have a bunch of functions that you use often in your workflow, and you’ve probably put them somewhere that’s easy for you to find. Mine is a Python package I uncreatively called PsyUtils. Whenever I find I want to use a function across multiple projects, I try to put it here. It’s a constant work in progress and isn’t supported for use outside my students and me, but maybe you’ll find something useful in it (or a bug; please do report it).
For example, it provides a fixation cross designed to encourage steady fixation (Thaler et al.), a bunch of image filters (modified from Peter Bex’s MATLAB code), and tools for doing nifty things with Gabor filterbanks. Most recently I added functions for working with psychophysical (Bernoulli trial) data, including some handy wrappers for faceting psychometric function fits using Seaborn. I find this really useful for quickly exploring experimental data, and maybe you will too.
Note again that I do not support this package. I might break backwards compatibility at any time (though I’ll try to use semantic versioning appropriately), and many of the functions are not well tested. Enjoy!
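For a flavour of the psychometric-function side of things: PsyUtils’ actual API isn’t shown here, but fitting a psychometric function to Bernoulli trial data by maximum likelihood looks something like the following generic sketch with NumPy/SciPy (the simulated data and all names are illustrative, not the package’s):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulate Bernoulli trial data: 9 stimulus intensities, 40 trials each.
rng = np.random.default_rng(0)
intensity = np.repeat(np.linspace(-2, 2, 9), 40)
true_p = norm.cdf(intensity, loc=0.2, scale=0.8)
response = rng.binomial(1, true_p)  # 0 / 1 responses

def neg_log_lik(params, x, y):
    """Negative log-likelihood of a cumulative-Gaussian psychometric function."""
    mu, log_sigma = params
    p = norm.cdf(x, loc=mu, scale=np.exp(log_sigma))
    p = np.clip(p, 1e-9, 1 - 1e-9)  # avoid log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(intensity, response),
               method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])  # should land near 0.2 and 0.8
```

With fits like this in hand per subject and condition, a Seaborn `FacetGrid` over the raw proportions plus the fitted curves is a quick way to eyeball a whole experiment at once.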
I haven’t used R extensively for some years now, having switched to mainly using Python. With the recent release of
brms, it looks like R is going to jump back into my software rotation for the foreseeable future.
Basically, packages like brms make some of the things I was doing by hand in Stan (Bayesian inference for generalised linear mixed models, GLMMs) a total breeze. Get on it!
How well can we predict where people will look in an image? A large variety of models have been proposed that try to predict where people look using only the information provided by the image itself. The MIT Saliency Benchmark, for example, compares 47 models.
So which model is doing the best? Well, it depends on which metric you use to compare them. That particular benchmark lists 7 metrics, and ordering by a different one changes the model rankings. That’s a bit confusing. Ideally we would find a metric that unambiguously tells us what we want to know.
Over a year ago, I wrote this blog post telling you about our preprint How close are we to understanding image-based saliency?. After reflecting on the work, we realised that the most useful contribution of that paper was buried in the appendix (Figure 10). Specifically, putting the models onto a common probabilistic scale makes all the metrics* agree. It also allows the model performance per se to be separated from nuisance factors like centre bias and spatial precision, and for model predictions to be evaluated within individual images.
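To make the probabilistic scale concrete: once a saliency model is expressed as a probability distribution over pixels, its prediction can be scored by the log-likelihood it assigns to observed fixations, typically reported as information gain over some baseline in bits per fixation. A toy sketch of the idea (not the paper’s code; the densities and numbers here are invented for illustration):

```python
import numpy as np

def information_gain(model_density, baseline_density, fixations):
    """Mean log-likelihood ratio (bits per fixation) of model over baseline.

    model_density, baseline_density: 2D arrays, each summing to 1
    (probability of a fixation landing on each pixel).
    fixations: array of (row, col) fixation locations.
    """
    rows, cols = fixations[:, 0], fixations[:, 1]
    return np.mean(np.log2(model_density[rows, cols])
                   - np.log2(baseline_density[rows, cols]))

# Toy 10x10 image: uniform baseline vs. a model that puts half its mass
# on the one pixel where all the fixations actually land.
h, w = 10, 10
baseline = np.full((h, w), 1.0 / (h * w))
model = np.full((h, w), 0.5 / (h * w - 1))
model[5, 5] = 0.5
fixations = np.array([[5, 5], [5, 5], [5, 5]])

ig = information_gain(model, baseline, fixations)  # log2(0.5 / 0.01), about 5.64 bits
```

A positive value means the model assigns more probability to where people actually looked than the baseline does; zero means it adds nothing beyond the baseline.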
We re-wrote the paper to highlight this contribution, and it’s now available here, along with the code.
Kümmerer, M., Wallis, T.S.A. and Bethge, M. (2015). Information-theoretic model comparison unifies saliency metrics. Proceedings of the National Academy of Sciences.
*all the metrics we evaluated, at least.
Deborah Apthorp pointed me to this excellent post by Uta Frith. It reflects a lot of my own feelings about how science could be improved (though I haven’t worked out how to balance these ideas with my personal need to get a permanent job in a fast science environment).
It reminds me of an idea that I heard from Felix Wichmann (I’m not sure of its origin) that every scientist would only be allowed to publish some fixed number (e.g. 20) of papers in their entire career. This would create pretty strong incentives not to “salami slice”, and to only publish results that you felt were deeply insightful. Of course, how to evaluate people for tenure and promotion in such a system is unclear.
I came across this blog on how to give good scientific talks. I’m up to the second post and I agree with almost all of it so far. Eliminating words on slides and using presenter notes is something I’ve recently started to do. It takes practice, but I feel like people are indeed more engaged in the talk.