Challenge details


Timeline

  • 05/04: Challenge announcement & registrations open

  • To be announced: Submission opens

  • 10/06: Submission closes

  • 10/06 - 30/06: Final check of validity of results and announcement of winners


Format

This challenge focuses on unsupervised denoising tasks.

Unsupervised (or self-supervised) training enables learning purely from single noisy images, without a paired noisy-clean training dataset, which makes such methods far more accessible for microscopy data. Gathering paired high-quality ground-truth images is often difficult in scientific microscopy, which further motivates self-supervised approaches. In addition, self-supervised methods offer the major advantage of training on the very same data that is to be denoised, mitigating the risk of domain mismatch between training and test data.
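To make the self-supervised setting concrete, the sketch below illustrates the blind-spot idea used by methods such as Noise2Void: mask a few random pixels, let the network predict them from their surroundings, and compute the loss only at the masked positions. This is a minimal PyTorch illustration, not part of the challenge infrastructure; the tiny network, masking fraction, and pixel-replacement scheme are all illustrative choices.

    import torch
    import torch.nn as nn

    def masked_batch(noisy, mask_frac=0.005):
        # Replace a small random subset of pixels with values taken from a
        # shifted copy of the image, so the network cannot learn the identity.
        mask = torch.rand_like(noisy) < mask_frac
        masked = noisy.clone()
        masked[mask] = torch.roll(noisy, shifts=(1, 1), dims=(-2, -1))[mask]
        return masked, mask

    # A deliberately tiny stand-in for a real denoising network.
    net = nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    noisy = torch.rand(8, 1, 64, 64)  # a batch of noisy patches
    masked, mask = masked_batch(noisy)
    opt.zero_grad()
    pred = net(masked)
    loss = ((pred - noisy)[mask] ** 2).mean()  # loss only at masked pixels
    loss.backward()
    opt.step()

Note that plain blind-spot training assumes pixel-wise independent noise; structured noise, as featured in two of the leaderboards below, typically requires extensions such as masking along the direction of the noise correlations.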

We therefore invite participants to submit deep-learning methods that can restore noisy images without requiring supervised training. For each challenge leaderboard, we provide an associated training dataset composed of noisy images.

After training your method, encapsulate your prediction code in a Docker container. This container should iterate over all training images, predicting and storing a denoised version of each one. An example to start from can be found here: Work in progress.
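Until the official template is available, a prediction entry point might look like the following sketch. The /input and /output mount points, the TIFF file format, and the model.pt checkpoint name are assumptions for illustration only; follow the official container specification once it is published.

    # predict.py - hypothetical container entry point (all paths are assumptions)
    from pathlib import Path

    import numpy as np
    import tifffile
    import torch

    IN_DIR, OUT_DIR = Path("/input"), Path("/output")

    # Loading a full serialized model is illustrative; a state_dict plus
    # model definition works just as well.
    net = torch.load("model.pt", map_location="cpu")
    net.eval()

    with torch.no_grad():
        for path in sorted(IN_DIR.glob("*.tif")):
            noisy = tifffile.imread(path).astype(np.float32)
            x = torch.from_numpy(noisy)[None, None]  # shape (1, 1, H, W)
            denoised = net(x).squeeze().numpy()
            tifffile.imwrite(OUT_DIR / path.name, denoised)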

After you submit your prediction container, we will automatically evaluate the predicted images against their clean (and non-public) counterparts, using the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics to gauge denoising quality; further details are available on the “Evaluation and metrics” page.
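As a rough local sanity check, both metrics are available in scikit-image; note that the organizers’ exact evaluation settings (normalization, data range) may differ from this sketch, which uses synthetic stand-in data.

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    rng = np.random.default_rng(0)
    clean = rng.random((128, 128)).astype(np.float32)  # stand-in ground truth
    pred = clean + 0.05 * rng.standard_normal((128, 128)).astype(np.float32)

    psnr = peak_signal_noise_ratio(clean, pred, data_range=1.0)
    ssim = structural_similarity(clean, pred, data_range=1.0)
    print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")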

We strongly believe in the FAIR guiding principles in science, so we highly encourage participants to make both their training and prediction code public. Only participants who make both training and prediction code public will be eligible for the winner title.


Leaderboards

We invite researchers to apply deep learning algorithms to four datasets featuring two types of noise: structured and unstructured. You can find more about the data for each leaderboard on the “Data description” page.

Here is the list of datasets for each leaderboard of this challenge:

Structured noise 1: Fluorescence Microscopy Datasets for Training Deep Neural Networks by Hagen et al. 

Guy M Hagen, Justin Bendesky, Rosa Machado, Tram-Anh Nguyen, Tanmay Kumar, Jonathan Ventura, Fluorescence microscopy datasets for training deep neural networks, GigaScience, Volume 10, Issue 5, May 2021, giab032, https://doi.org/10.1093/gigascience/giab032


Structured noise 2: Voltage imaging data from SUPPORT (Statistically unbiased prediction enables accurate denoising of voltage imaging data) by Eom et al.

Eom, M., Han, S., Park, P. et al. Statistically unbiased prediction enables accurate denoising of voltage imaging data. Nat Methods 20, 1581–1592 (2023). https://doi.org/10.1038/s41592-023-02005-8


Unstructured noise 1: JUMP Cell Painting Datasets

We used the JUMP Cell Painting datasets (Chandrasekaran et al., 2023), available from the Cell Painting Gallery on the Registry of Open Data on AWS (https://registry.opendata.aws/cellpainting-gallery/).

Unstructured noise 2: W2S: Microscopy Data with Joint Denoising and Super-Resolution for Widefield to SIM Mapping

Zhou, R., El Helou, M., Sage, D., Laroche, T., Seitz, A., Süsstrunk, S. (2020). W2S: Microscopy Data with Joint Denoising and Super-Resolution for Widefield to SIM Mapping. In: Bartoli, A., Fusiello, A. (eds) Computer Vision – ECCV 2020 Workshops. Lecture Notes in Computer Science, vol 12535. Springer, Cham. https://doi.org/10.1007/978-3-030-66415-2_31

Submit your method, in the form of a prediction Docker container, separately to each of the leaderboards.