Author: tsawallis

New paper: testing models of peripheral encoding using appearance matching

We only perceive a small fraction of the information that enters our eyes. What information is represented and what is discarded? Freeman and Simoncelli (2011) introduced a neat method of psychophysically testing image-based models of visual processing. If images that produce identical model responses also appear identical to human observers, it implies that the model is only discarding information that does not matter for perception (and conversely, retaining image structure that matters). The images are metamers: physically different images that appear the same (the term originates in the study of colour vision).

Our latest paper extends this approach and sets a higher bar for model testing. In the original study, Freeman and Simoncelli synthesised images from a model of peripheral visual processing, and showed that observers could not tell two synthesised images apart from each other at an appropriate level of information loss (in this case, the scaling of pooling regions spanning into the visual periphery). However, observers in these experiments never compared the model images to the original (unmodified) images. If we’re interested in the appearance of natural scenes, this is not a sufficient test. To take one extreme, a “blind” model that discarded all visible information would produce images that were indiscriminable from each other, but would no doubt fail to match the appearance of a natural source image.

We extend this approach by having observers compare model-compressed images to the original image. If models are good candidates for the kind of information preserved in the periphery, they should succeed in matching the appearance of the original scenes (that is, the information they preserve is sufficient to match appearance).

We apply this logic to two models of peripheral appearance: one in which high spatial frequency information (fine detail) is lost in the periphery (simulated using Gaussian blur), and another in which image structure in the periphery is assumed to be “texturised”. We had observers discriminate images using a three-alternative temporal oddity task. Three image patches are presented consecutively; two are identical to each other, and one is different. The “oddball” could be either the original or the modified image. The observer indicates whether image 1, 2 or 3 was different to the other two. If the images appear identical, the observer will achieve 33% correct, on average.
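As a quick sanity check on that chance level, a purely guessing observer can be simulated in a few lines (a minimal sketch with hypothetical names, not our experiment code):

```python
import random

def simulate_oddity_trials(n_trials=10_000, n_intervals=3, seed=0):
    """Simulate an observer who guesses randomly in a temporal oddity task.

    On each trial the oddball appears in one of `n_intervals` positions;
    a guessing observer picks a position at random.
    """
    rng = random.Random(seed)
    correct = sum(
        rng.randrange(n_intervals) == rng.randrange(n_intervals)
        for _ in range(n_trials)
    )
    return correct / n_trials

# A guessing observer converges on 1/3 correct:
print(simulate_oddity_trials())
```

With the oddball equally likely to appear in each interval, performance reliably above 1/3 indicates that the observer can tell the original and modified images apart.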

Our results show that neither a blur model nor a texture model is particularly good at matching peripheral appearance. Human observers were more sensitive to natural appearance than might be expected from either of these models, implying that richer representations than the ones we examined will be required to match the appearance of natural scenes in the periphery. That is, the models discard too much information.

Finally, we note that appearance matching alone is not sufficient. A model that discards no information would match appearance perfectly. We instead seek the most compressed (parsimonious) model that also matches appearance. Therefore, the psychophysical approach we outline here must ultimately be paired with information-theoretic model comparison techniques to adjudicate between multiple models that successfully match appearance.

You can read our paper here, and the code, data and materials are also available (you can also find the code on Github).

Wallis, T. S. A., Bethge, M., & Wichmann, F. A. (2016). Testing models of peripheral encoding using metamerism in an oddity paradigm. Journal of Vision, 16(2), 4.

Freeman, J., & Simoncelli, E. P. (2011). Metamers of the ventral stream. Nature Neuroscience, 14(9), 1195–1201.


PsyUtils: my utility functions package

You probably have a bunch of functions that you use often in your workflow. You’ve probably put these into a common place that’s easy for you to find. Mine is a Python package I uncreatively called PsyUtils. Whenever I find I want to use a function across multiple projects, I try to put it here. This is a constant work in progress, and isn’t supported for use by anyone outside my students and me, but maybe you’ll find something useful in it (or a bug; if so, please report it).

For example, it can give you a fixation cross for encouraging steady fixation (Thaler et al.), a bunch of filters (modified from Peter Bex’s Matlab code), and nifty things to do with Gabor filterbanks. Most recently, I added some functions for working with psychophysical (Bernoulli trial) data, including some handy wrappers for faceting psychometric function fits using Seaborn. I find this really useful for quickly exploring experimental data, and maybe you will too.
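To give a flavour of what I mean by working with Bernoulli-trial data, here is a generic maximum-likelihood psychometric function fit using scipy (a sketch of the general technique, not PsyUtils’s actual API; all names are mine):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_psychometric(x, n_correct, n_trials, guess_rate=1/3):
    """Maximum-likelihood fit of a cumulative-Gaussian psychometric function.

    p(correct | x) = guess_rate + (1 - guess_rate) * Phi((x - mu) / sigma),
    where `guess_rate` is the chance level (1/3 for a 3-alternative oddity task).
    """
    def neg_log_lik(params):
        mu, log_sigma = params
        p = guess_rate + (1 - guess_rate) * norm.cdf((x - mu) / np.exp(log_sigma))
        p = np.clip(p, 1e-6, 1 - 1e-6)  # guard against log(0)
        return -np.sum(n_correct * np.log(p)
                       + (n_trials - n_correct) * np.log(1 - p))

    res = minimize(neg_log_lik, x0=[np.median(x), 0.0], method="Nelder-Mead")
    mu, log_sigma = res.x
    return mu, np.exp(log_sigma)

# Simulate trial data with a true threshold (mu) of 1.0 and slope (sigma) of 0.5:
rng = np.random.default_rng(0)
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
true_p = 1/3 + (2/3) * norm.cdf((x - 1.0) / 0.5)
n_trials = np.full_like(x, 200)
n_correct = rng.binomial(200, true_p)
mu_hat, sigma_hat = fit_psychometric(x, n_correct, n_trials)
```

Wrapping a fit like this in a plotting function is what makes faceting across conditions (e.g. with Seaborn) so convenient for exploratory analysis.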

Note again that I do not support this package. I might break backwards compatibility at any time (I’ll try to use semantic versioning appropriately), and many of the functions are not well tested. Enjoy!

Quick link: rstanarm and brms

I haven’t used R extensively for some years now, having switched to mainly using Python. With the recent release of rstanarm and brms, it looks like R is going to jump back into my software rotation for the foreseeable future.

Basically, these packages make some of the stuff I was doing by hand in Stan (Bayesian inference for Generalised Linear Mixed Models, GLMMs) a total breeze. Get on it!

New paper: Unifying saliency metrics


How well can we predict where people will look in an image? A large variety of models have been proposed that try to predict where people look using only the information provided by the image itself. The MIT Saliency Benchmark, for example, compares 47 models.

So which model is doing the best? Well, it depends on which metric you use to compare them. That particular benchmark lists 7 metrics, and ordering by a different metric changes the model rankings. That’s a bit confusing. Ideally we would find a metric that unambiguously tells us what we want to know.

Over a year ago, I wrote this blog post telling you about our preprint How close are we to understanding image-based saliency?. After reflecting on the work, we realised that the most useful contribution of that paper was buried in the appendix (Figure 10). Specifically, putting the models onto a common probabilistic scale makes all the metrics* agree. It also allows model performance per se to be separated from nuisance factors like centre bias and spatial precision, and allows model predictions to be evaluated within individual images.
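To sketch what “a common probabilistic scale” means in practice: treat each model’s output as a probability density over image locations, and score the model by the average log-likelihood of observed fixations relative to a baseline density (a toy illustration of the idea, not the paper’s actual code; the names are mine):

```python
import numpy as np

def information_gain(density, baseline, fixations):
    """Mean log-likelihood gain (bits per fixation) of a saliency density
    over a baseline density, evaluated at observed fixation locations.

    `density` and `baseline` are 2D arrays that each sum to 1;
    `fixations` is a list of (row, col) pixel coordinates.
    """
    rows, cols = zip(*fixations)
    return float(np.mean(np.log2(density[rows, cols])
                         - np.log2(baseline[rows, cols])))

# Toy 2x2 "image": the model concentrates mass where most fixations land.
model = np.array([[0.7, 0.1],
                  [0.1, 0.1]])
uniform = np.full((2, 2), 0.25)
fixations = [(0, 0), (0, 0), (1, 1)]
gain = information_gain(model, uniform, fixations)  # positive: model beats uniform
```

Because every model is evaluated on the same likelihood scale, comparisons no longer depend on which of several ad hoc metrics you happen to pick.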

We re-wrote the paper to highlight this contribution, and it’s now available here. The code is available here.

Citation:

Kümmerer, M., Wallis, T. S. A., & Bethge, M. (2015). Information-theoretic model comparison unifies saliency metrics. Proceedings of the National Academy of Sciences.

*all the metrics we evaluated, at least.


Quick link: Slow science

Deborah Apthorp pointed me to this excellent post by Uta Frith. It reflects a lot of my own feelings about how science could be improved (though I haven’t worked out how to balance these ideas with my personal need to get a permanent job in a fast science environment).

It reminds me of an idea I heard from Felix Wichmann (I’m not sure of its origin): that every scientist would be allowed to publish only a fixed number of papers (e.g. 20) in their entire career. This would create pretty strong incentives not to “salami slice”, and to publish only results that you felt were deeply insightful. Of course, how to evaluate people for tenure / promotion in such a system is unclear.