
The Paper has a GitHub, the GitHub has a README, the README has Nothing: Reproducibility Signals for Review Support / Bolelli, Federico; Santoli, Davide; Marchesini, Kevin; Lumetti, Luca; Grana, Costantino. - (2026). (29th International Conference on Medical Image Computing and Computer Assisted Intervention, Strasbourg, France, Sep 27-Oct 1).

The Paper has a GitHub, the GitHub has a README, the README has Nothing: Reproducibility Signals for Review Support

Bolelli, Federico; Santoli, Davide; Marchesini, Kevin; Lumetti, Luca; Grana, Costantino
2026

Abstract

Reproducibility policies promise "checkable" medical-imaging science, yet many submissions still ship unverifiable artifacts. An analysis of 3722 MICCAI papers shows code-linking rising from 51.8% to 72.5%, but ~13% of linked repositories are inaccessible or empty. We present paper-snitch, a reviewer-facing decision-support tool that turns these signals into an evidence-grounded report. paper-snitch parses PDFs, resolves and sanity-checks repositories, and applies policy-aware checklists aligned with MICCAI expectations, producing a review-time verifiability score decomposed into interpretable sub-scores plus criterion-linked excerpts and artifacts reviewers can inspect. It never executes untrusted code or attempts GPU-heavy reproduction, focusing instead on bounded, verifiable checks. We compare paper-snitch against human annotators on 100 randomly sampled MICCAI 2025 papers using shared evaluation criteria; the results indicate that automated, bounded checks can scale reproducibility screening while keeping final decisions with reviewers.
11 May 2026
29th International Conference on Medical Image Computing and Computer Assisted Intervention
Strasbourg, France
Sep 27-Oct 1
Files associated with this product:
No files are associated with this product.

Creative Commons License
Metadata in IRIS UNIMORE are released under the Creative Commons CC0 1.0 Universal license, while publication files are released under the Attribution 4.0 International (CC BY 4.0) license, unless otherwise stated.
In case of copyright infringement, contact Iris Support

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1405368