Vision journal community responses

In January 2016, the vision science mailing list CVNet hosted a long discussion about journal costings and community priorities for publishing our work. The discussion resulted in a survey, whose responses are here.

There were 380 responses to the questionnaire. Highlights from Alex Holcombe’s summary email include:

To the question “Which financial/organizational aspect of journals should be the community’s top priority?”, of the six options provided:

  • “open access” was the most popular answer, with 132 responses
  • “Full academic or professional society control” was 2nd, with 78 responses
  • “Low cost” was 3rd, with 61 responses

Also, to the question “What should the academics on the editorial boards of overpriced journals (be they subscription or open access) do?”:

  • “Work with the publisher to reform the journal itself” was the most popular answer, with 214 votes
  • “Wait until a majority or supermajority of editors agree to resign, and then resign en masse, with a plan agreed among the editors to join or start something else” was 2nd, with 90 votes

Vision journals were invited to respond to the survey results, in particular:

Perhaps the most salient question raised both by the survey responses and the CVnet discussion is exactly why each journal is as expensive/cheap as it is, particularly its open access option, and whether each journal will provide transparent accounting of costs. Given that the data indicate that “Full academic or professional society control” is a high priority, editors should also comment on the ability of themselves and the rest of us to affect their journal’s policies, features and cost.

Journals that have responded so far (last update: Nov 2016)

  • Perception / iPerception wrote a comprehensive response (copied from CVNet here).
  • Frontiers responded here.
  • ARVO (publisher of Journal of Vision and IOVS) responded (copied from CVNet here). They also voted to remove the USD 500 fee for Gold (CC-BY) open access on June 2, 2016 (yay!).
  • Psychonomic Society (publisher of APP and Psych Bull & Rev) responded (copied from CVNet here).

Journals that were invited to respond but have not

  • Vision Research (Elsevier)
  • Journal of Experimental Psychology: Human Perception and Performance (APA)
  • Multisensory Research
  • “Vision” (MDPI)

Please email me if I am missing a response.

Withholding review until response

Because some journals have been so slow to respond to the community, I took the following action today:

Dear Prof XY,

As you’re aware, in January 2016 CVNet hosted a long discussion about open-access charges and journal costings more generally. This discussion resulted in a survey of the community (results here: https://docs.google.com/…/1tfpSVeLflOG4moGvhHlT2SivnW…/edit…). All journals publishing vision-related content were invited to respond to the survey, particularly addressing “exactly why each journal is as expensive/cheap as it is, particularly its open access option, and whether each journal will provide transparent accounting of costs. Given that the data indicate that “Full academic or professional society control” is a high priority, editors should also comment on the ability of themselves and the rest of us to affect their journal’s policies, features and cost.”

To my knowledge, Vision Research has as yet failed to respond to this survey, despite having agreed to such a response at its editorial board meeting at VSS in May. This is in contrast to some other journals and publishers, such as Perception / iPerception and ARVO. If this understanding is mistaken, please let me know and I will correct my stance.

Failing that, I therefore choose to withhold my services as a reviewer until such time as Vision Research / Elsevier engage with the community they supposedly serve.

Best regards

Tom Wallis

If you’re a member of this community, perhaps you’ll consider the same response.

Open peer review!

Our manuscript “Detecting distortions of peripherally-presented letter stimuli under crowded conditions” (see here) has received an open peer review from Will Harrison. Thanks for your comments Will! They will be valuable in improving the manuscript in a future revision.

Hurrah for open science!*

* as suggested here, from now on “Open Science” will just be called “Science”, and everything else will be called “Closed Science”.

Quick link: QuestIntuition

My friend Daniel Saunders recently released a neat little graphical toolbox for Matlab called QuestIntuition that allows you to play around with the QUEST procedure for adaptive sampling of psychometric thresholds. This will be a good resource for students learning how QUEST works.

Some questions (the first of which is sketched in code after the list):

  1. What happens if the initial guess is way off? How many trials does QUEST need to recover?
  2. What happens if the assumed slope is way off?
  3. What happens if the upper asymptote of the psychometric function is lower than the threshold QUEST is trying to find (I saw this in a paper I reviewed once)? Does it still produce reasonable samples?
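
To get a feel for question 1 without firing up Matlab, here is a minimal from-scratch sketch of a QUEST-style Bayesian staircase in Python. It is not the QuestIntuition toolbox, and the psychometric function and parameter values are invented for illustration, but it shows the basic machinery: keep a posterior over the threshold, test at the posterior mode, and update after every trial.

    import numpy as np

    def weibull(intensity, threshold, slope=3.5, guess=0.5, lapse=0.02):
        """Probability of a correct response at a given log intensity."""
        return guess + (1 - guess - lapse) * (1 - np.exp(-10 ** (slope * (intensity - threshold))))

    def run_quest(true_threshold, initial_guess, n_trials=64, prior_sd=2.0, seed=0):
        rng = np.random.default_rng(seed)
        grid = np.linspace(-4, 4, 801)                    # candidate thresholds (log units)
        log_post = -0.5 * ((grid - initial_guess) / prior_sd) ** 2   # Gaussian prior on threshold
        estimates = []
        for _ in range(n_trials):
            test = grid[np.argmax(log_post)]              # place the next trial at the posterior mode
            correct = rng.random() < weibull(test, true_threshold)   # simulated observer
            likelihood = weibull(test, grid)              # likelihood for every candidate threshold
            log_post += np.log(likelihood if correct else 1 - likelihood)   # Bayesian update
            estimates.append(grid[np.argmax(log_post)])
        return np.array(estimates)

    # Question 1: start two log units away from the true threshold and watch the recovery.
    track = run_quest(true_threshold=0.0, initial_guess=2.0)
    print(track[::8])   # running threshold estimate, every 8th trial

In this sketch the estimate should drift back towards the true threshold within a few dozen trials; making the (wrong) prior narrower slows the recovery, which is exactly the kind of behaviour the toolbox lets you watch interactively.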

Check it out!

New paper: testing models of peripheral encoding using appearance matching

We only perceive a small fraction of the information that enters our eyes. What information is represented and what is discarded? Freeman and Simoncelli (2011) introduced a neat method of psychophysically testing image-based models of visual processing. If images that produce identical model responses also appear identical to human observers, it implies that the model is only discarding information that does not matter for perception (and conversely, retaining image structure that matters). The images are metamers: physically different images that appear the same (the term originates in the study of colour vision).

Our latest paper extends this approach and sets a higher bar for model testing. In the original study, Freeman and Simoncelli synthesised images from a model of peripheral visual processing, and showed that observers could not tell two synthesised images apart from each other at an appropriate level of information loss (in this case, the scaling of pooling regions spanning into the visual periphery). However, observers in these experiments never compared the model images to the original (unmodified) images. If we’re interested in the appearance of natural scenes, this is not a sufficient test. To take one extreme, a “blind” model that discarded all visible information would produce images that were indiscriminable from each other, but would no doubt fail to match the appearance of a natural source image.

We extend this approach by having observers compare model-compressed images to the original image. If models are good candidates for the kind of information preserved in the periphery, they should succeed in matching the appearance of the original scenes (that is, the information they preserve is sufficient to match appearance).

We apply this logic to two models of peripheral appearance: one in which high spatial frequency information (fine detail) is lost in the periphery (simulated using Gaussian blur), and another in which image structure in the periphery is assumed to be “texturised”. We had observers discriminate images using a three-alternative temporal oddity task. Three image patches are presented consecutively; two are identical to each other, and one is different. The “oddball” could be either the original or the modified image. The observer indicates whether image 1, 2 or 3 was different to the other two. If the images appear identical, the observer will achieve 33% correct, on average.
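
As an aside, the 33% chance level is easy to verify with a toy simulation. The snippet below is purely illustrative (it is not the paper’s analysis code) and assumes a simple high-threshold observer: with probability p_detect the observer spots the modification and responds correctly, and otherwise guesses among the three intervals.

    import numpy as np

    def simulate_oddity(p_detect, n_trials=10_000, seed=0):
        """Proportion correct in a 3-interval oddity task for a high-threshold observer."""
        rng = np.random.default_rng(seed)
        detected = rng.random(n_trials) < p_detect        # trials where the difference is seen
        lucky_guess = rng.integers(0, 3, n_trials) == 0   # 1-in-3 correct guesses otherwise
        return np.mean(detected | lucky_guess)

    for p in (0.0, 0.5, 1.0):
        print(f"p_detect = {p:.1f} -> proportion correct = {simulate_oddity(p):.2f}")
    # p_detect = 0.0 gives roughly 0.33 (chance); p_detect = 1.0 gives 1.00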

Our results show that neither a blur model nor a texture model is particularly good at matching peripheral appearance. Human observers were more sensitive to natural appearance than would be expected from either of these models, implying that richer representations than the ones we examined will be required to match the appearance of natural scenes in the periphery. That is, the models discard too much information.

Finally, we note that appearance matching alone is not sufficient. A model that discards no information would match appearance perfectly. We instead seek the most compressed (parsimonious) model that also matches appearance. Therefore, the psychophysical approach we outline here must ultimately be paired with information-theoretic model comparison techniques to adjudicate between multiple models that successfully match appearance.

You can read our paper here, and the code, data and materials are also available (you can also find the code on Github).

Wallis, T. S. A., Bethge, M., & Wichmann, F. A. (2016). Testing models of peripheral encoding using metamerism in an oddity paradigm. Journal of Vision, 16(2), 4.

Freeman, J., & Simoncelli, E. P. (2011). Metamers of the ventral stream. Nature Neuroscience, 14(9), 1195–1201.