eye movements

New paper: Unifying saliency metrics


How well can we predict where people will look in an image? A large variety of models have been proposed that try to predict where people look using only the information provided by the image itself. The MIT Saliency Benchmark, for example, compares 47 models.

So which model is doing the best? Well, it depends which metric you use to compare them. That benchmark lists 7 metrics, and ordering by a different metric changes the model rankings. That’s a bit confusing. Ideally we would find a metric that unambiguously tells us what we want to know.

Over a year ago, I wrote this blog post telling you about our preprint How close are we to understanding image-based saliency?. After reflecting on the work, we realised that the most useful contribution of that paper was buried in the appendix (Figure 10). Specifically, putting the models onto a common probabilistic scale makes all the metrics* agree. It also allows model performance per se to be separated from nuisance factors like centre bias and spatial precision, and lets model predictions be evaluated within individual images.
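To give a flavour of what "a common probabilistic scale" means, here is a minimal sketch: normalize an arbitrary saliency map into a probability distribution over pixels, then score it by the average log-likelihood of observed fixations. (This is an illustrative toy only; the paper's actual approach fits a pointwise nonlinearity, centre bias, and blur rather than normalizing raw maps directly. All names and numbers below are hypothetical.)

```python
import numpy as np

def saliency_to_prob(saliency_map):
    """Turn a saliency map into a probability distribution over pixels
    by shifting to non-negative values and normalizing to sum to 1.
    (Toy conversion; not the fitted transform used in the paper.)"""
    s = saliency_map - saliency_map.min()
    return s / s.sum()

def avg_log_likelihood(prob_map, fixations):
    """Average log-likelihood (bits per fixation) of observed fixation
    coordinates under the probability map."""
    rows, cols = zip(*fixations)
    return np.mean(np.log2(prob_map[rows, cols]))

# toy example: a 4x4 saliency map and two fixations on/near its peak
sal = np.array([[1., 2., 1., 0.],
                [2., 8., 2., 1.],
                [1., 2., 1., 0.],
                [0., 1., 0., 0.]])
p = saliency_to_prob(sal)
ll = avg_log_likelihood(p, [(1, 1), (1, 2)])
```

Once every model is expressed this way, they can all be scored with the same log-likelihood yardstick, which is why the various metrics stop disagreeing.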

We re-wrote the paper to highlight this contribution, and it’s now available here. The code is available here.

Citation:

Kümmerer, M., Wallis, T.S.A. and Bethge, M. (2015). Information-theoretic model comparison unifies saliency metrics. Proceedings of the National Academy of Sciences.

*all the metrics we evaluated, at least.

NOTE: I don’t endorse the ads below. I’d have to pay WordPress to remove them.


How close are we to understanding image-based saliency?

How well can we predict where people look in stationary natural images? While the scope of this question addresses only a fraction of what it means to understand eye movements in natural environments, it nevertheless remains a starting point to study this complex topic. It’s also of great interest to both cognitive scientists and computer vision researchers since it has applications from advertising to robotics.

Matthias Kümmerer has come up with a statistical framework that answers this question in a principled way. Building on the nice work of Simon Barthelmé and colleagues, Matthias has shown how saliency models can be compared in units of information (i.e. using log-likelihoods). Since information provides a meaningful linear metric, it allows us to compare the distance between model predictions, a baseline (capturing image-independent spatial biases in fixations) and the gold standard (i.e. how well you could possibly do, knowing only the image).
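The linear information scale makes "how far along are we?" a simple ratio: the model's gain over the baseline, divided by the gold standard's gain over the baseline. A small sketch with made-up log-likelihood numbers (bits per fixation; all values hypothetical, chosen only to illustrate the arithmetic):

```python
def information_gain_explained(ll_model, ll_baseline, ll_gold):
    """Fraction of the possible information gain (baseline -> gold standard)
    that a model accounts for. Inputs are average log-likelihoods in
    bits per fixation."""
    return (ll_model - ll_baseline) / (ll_gold - ll_baseline)

# hypothetical numbers: image-independent centre-bias baseline, a saliency
# model, and the gold standard (best possible image-based prediction)
ratio = information_gain_explained(ll_model=-2.0,
                                   ll_baseline=-2.5,
                                   ll_gold=-1.0)
```

Here the model closes 0.5 of the 1.5 possible bits, i.e. about a third of the achievable information gain, which is the kind of number reported below.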

So how close are we to understanding image-based saliency? Turns out, not very. The best model we tested (a state-of-the-art model from 2014 by Vig, Dorr and Cox) explained about one third of the possible information gain between the baseline and the gold standard in the dataset we used. If you want to predict where people look in stationary images, there’s still a long way to go.

In addition, our paper introduces methods to show, in individual images, where and by how much a model fails (see the image above). We think this is going to be really useful for people who are developing saliency models. Finally, we extend the approach to the temporal domain, showing that knowing about both spatial and temporal biases, but nothing about the image, gives you a better prediction than the best saliency model using only spatial information.
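One way to picture such a per-image failure map: compare the model's log-probability to the gold standard's, pixel by pixel, so that negative regions mark where the model underestimates fixation density. (A hypothetical sketch of the idea, not the paper's exact computation; the example arrays are invented.)

```python
import numpy as np

def failure_map(model_prob, gold_prob, eps=1e-12):
    """Pixelwise log-probability difference between a model and the gold
    standard. Negative values: model underestimates fixation density there;
    positive values: model overestimates it. eps guards against log(0)."""
    return np.log2(model_prob + eps) - np.log2(gold_prob + eps)

# toy 2x2 example: the model overweights the top-left pixel and
# underweights the top-right, relative to a uniform gold standard
model = np.array([[0.4, 0.1],
                  [0.25, 0.25]])
gold = np.full((2, 2), 0.25)
diff = failure_map(model, gold)
```

Summing such a map over fixated locations recovers the image-level information difference, so the per-image view and the benchmark score stay consistent.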

The nice thing about this last point is that it shows that Matthias’ method is very general. If you don’t think that measuring where people look in stationary images tells you much about eye movements in the real world, that’s fine. You can still use the method to quantify and compare data and models in your exciting new experiment.

A paper that goes into much more detail than this blog post is now available on arXiv. In particular, saliency experts should check out the appendices, where we think we’ve resolved some of the reasons why the saliency model comparison literature was so muddled.

We’re close to submitting it, so we’d love to hear feedback on the pitch and nuance of our story, or anything that may not be clear in the paper. You can send me an email to pass on your thoughts.

When the paper is out we will also be making a software framework available for model comparison and evaluation. We hope the community will find it a useful tool.


As usual, everything that appears below this line is ads and not endorsed by me. I don’t make money from this site; I would have to pay WordPress to remove the ads.