[Insight-developers] REPRODUCIBILITY: IEEE CVPR 2010, Moved to the Bright Side !!!

Torsten Rohlfing torsten at synapse.sri.com
Sat Nov 21 15:30:06 EST 2009


Hi Luis --

I have to disagree with some of your points, unfortunately.

However, I do not disagree at all that this is a move in the right 
direction, and I am thrilled to see it from a conference as strong as CVPR.

Now for my disagreements: the review will not actually become trivial 
just because data and source code are provided. Neither answers the 
question of how significant and original the research is, and those are 
quite important review criteria. All that data and code help us with is 
a) ensuring that the current presumption of reproducibility is actually 
justified, and b) building on others' research without having to 
re-implement their code from scratch.

Now there is one final problem here as far as the impact of data and 
code availability on the CVPR reviews is concerned: they are not 
actually available during the review phase. The authors only state in 
the paper whether and how code and data will be released AFTER the 
paper has been accepted. So, as reviewers, we still have to assume that 
the code does indeed function as described in the paper, and we 
furthermore have to trust that the authors will make good on their 
promise after acceptance.

I can't say that's unreasonable, though: just as we shouldn't blindly 
trust authors, we certainly shouldn't blindly trust reviewers either 
(after all, they are basically the same people). If the code were 
available during review, reviewers might be tempted to take it for 
their own work yet reject the paper, perhaps even rejecting it 
deliberately to gain an advantage.

Anyway, the bottom line is that the CVPR review process isn't really 
affected much by the new criterion. It would be nice, though, if the 
conference implemented a reward for releasing code and data, perhaps by 
adding a certain bonus to the reviewer scores.

Best,
   Torsten

> Hi David,
>
> I agree, it will be fascinating to see how this reshapes the field.
>
> Regarding your concern, I would argue that if the paper is "really"
> reproducible, then the review should become trivial.
>
> It should come down to a one-hour exercise of:
>
> 1) Download the data.
> 2) Download the software (and potentially build it).
> 3) Download the scripts, with parameters, that run the software.
> 4) Go for lunch.
> 5) Come back and compare the results with the paper.
>
> There shouldn't be ANYTHING left for the reviewer (or the reader)
> to guess or to figure out. The instructions should be quite explicit.
>
> All figures in the paper must be regenerable by running "make"
> on the materials downloaded in steps (1)-(3).
>
> On the other hand, preparing the paper will become more involved,
> but, again, to the benefit of practices in the field.
>
> Even for the authors themselves, it will be great to have the
> entire structure of the paper under CVS, which they can check out
> and rerun in a matter of minutes to hours.
>
> What we will all learn is that reproducibility leads to a full
> set of good practices.
>
>
>       Luis
>
>    
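
To make Luis's "make" point above concrete, here is a minimal sketch of 
such a Makefile. All file and script names below are hypothetical, 
purely for illustration; a real paper would substitute its own data, 
tools, and plotting scripts.

    # Top of the tree downloaded in steps (1)-(3).
    # (Recipe lines must start with a TAB.)
    FIGURES = figure1.pdf figure2.pdf

    all: $(FIGURES)

    # Re-run the experiment with the exact parameters from the paper.
    results.csv: scripts/run_experiment.sh data/input.dat
	sh scripts/run_experiment.sh data/input.dat > results.csv

    # Regenerate each figure from the freshly computed results.
    figure%.pdf: results.csv scripts/plot_figure%.py
	python scripts/plot_figure$*.py results.csv $@

A reviewer would then only have to type "make" at the top of the 
downloaded tree and, after lunch, compare the regenerated figures 
against those in the paper.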

-- 
Torsten Rohlfing, PhD          SRI International, Neuroscience Program
  Senior Research Scientist      333 Ravenswood Ave, Menlo Park, CA 94025
   Phone: ++1 (650) 859-3379      Fax: ++1 (650) 859-2743
    torsten at synapse.sri.com        http://www.stanford.edu/~rohlfing/

      "Though this be madness, yet there is a method in't"


