Submission Instructions

Submission and Phases Overview

Submissions to the HNTS-MRG 2024 challenge should be in the form of inference containers submitted as Grand Challenge algorithms. Grand Challenge algorithms are Docker containers that encapsulate the trained AI model (both the architecture and the model weights) and all necessary components. These components enable the container to load cases (MRI images), generate predictions (segmentation masks), and store the outputs (*.mha segmentation files) for subsequent evaluation. You can review the official Grand Challenge documentation pages on Creating an Algorithm for more details. Please note that you should use your own computing resources, or a public platform of your choice, for model training.
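For orientation, the skeleton below sketches what such a container does at run time: read each MRI, predict, and write a mask. This is a minimal sketch only; the /input and /output paths and the predict() stub are illustrative assumptions, and the exact I/O interface is defined by the templates in our GitHub repository (see the packaging section below).

```python
# Minimal sketch of a Grand Challenge algorithm's run-time behavior.
# The /input and /output paths and predict() are illustrative; follow the
# challenge's GitHub templates for the exact interface.
from pathlib import Path

import SimpleITK as sitk


def predict(image: sitk.Image) -> sitk.Image:
    """Stand-in for your trained model's inference (weights live inside the container)."""
    raise NotImplementedError


def main() -> None:
    input_dir = Path("/input")    # assumed mount point for the case's MRI
    output_dir = Path("/output")  # assumed location where predictions are collected
    output_dir.mkdir(parents=True, exist_ok=True)

    for image_path in sorted(input_dir.rglob("*.mha")):
        image = sitk.ReadImage(str(image_path))
        mask = predict(image)
        sitk.WriteImage(mask, str(output_dir / image_path.name), useCompression=True)


if __name__ == "__main__":
    main()
```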

For each task (Task 1: pre-RT segmentation, Task 2: mid-RT segmentation), there are two phases: a preliminary development phase and a final test phase. Both phases open simultaneously on August 15, 2024.

  • Preliminary Development Phase: This phase is designed for debugging algorithms and familiarizing yourself with the Docker submission framework. You are allowed up to 5 valid submissions during this phase. Results from the development phase will not impact rankings; this phase is purely a practice run and sanity check. Participation in the development phase is optional but highly encouraged. The data used in this phase will not be reused in the final test phase.
  • Final Test Phase: This phase is for the official evaluation and ranking of your algorithm. You will have only 1 valid submission in this phase; in other words, each team may submit only 1 algorithm/model for the final test set.

Additional Preliminary Development Phase Details:

  • Composed of 2 patients not in the training set.
  • Simple cases with easy segmentation targets have been selected to facilitate debugging.
  • Development phase results will be immediately shown on the leaderboard.
  • If you have developed multiple algorithms for a task, we advise against using the preliminary development phase results to decide which algorithm to submit for the final test phase: the preliminary phase sample size is too small to support reliable conclusions. You are likely better off performing your own internal validation to select your best model. Again, this phase is mainly for debugging/sanity checks.

Additional Final Test Phase Details:

  • Composed of 50 patients not in the training set or development phase.
  • The test cases' distribution (e.g., image characteristics, tumor subsite, response status) mirrors that of the training cases.
  • Final test phase results will be hidden until the end of the submission period.

Packaging Your Algorithm into a Docker Container Image

Video tutorial version here: https://youtu.be/flpGBMqxtsM.

We have created a GitHub repository with extensive documentation and examples on how to package your Python inference scripts into a Docker container image. Rather than duplicating detailed instructions here, please refer to the tutorial in the GitHub repo. By following those instructions, you will produce a ZIP file containing your Docker container image. Please note that the ZIP file should be under 10 GB (a Grand Challenge recommendation, not a hard limit). Before implementing your own algorithm from any of these templates, we recommend first trying an unaltered template.

Note 1: The size, spacing, origin, and direction of the generated prediction masks should match those of the corresponding MRI for the given task (i.e., the pre-RT image for Task 1, the mid-RT image for Task 2).
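If your pipeline converts the MRI to a NumPy array for inference, one way to satisfy this requirement (a sketch, assuming SimpleITK and that the prediction stays on the original voxel grid) is to copy the geometry of the input image onto the predicted mask:

```python
# Sketch: give a predicted mask the same geometry as the source MRI.
# Assumes the prediction array is on the same voxel grid as the input image;
# if you resampled for inference, resample the mask back (nearest neighbor) first.
import numpy as np
import SimpleITK as sitk


def array_to_mask(pred: np.ndarray, reference: sitk.Image) -> sitk.Image:
    mask = sitk.GetImageFromArray(pred.astype(np.uint8))  # (z, y, x) array
    mask.CopyInformation(reference)  # copies origin, spacing, and direction
    return mask
```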

Note 2: All Docker containers submitted to the challenge will be run in an offline setting (i.e., they will not have access to the internet and cannot download/upload any resources). All necessary resources (e.g., pre-trained weights) must be encapsulated in the submitted containers a priori.
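For example (a sketch, assuming PyTorch and a hypothetical path), weights copied into the image at build time can be loaded from the local filesystem instead of being fetched from the internet:

```python
# Sketch: load weights that were COPY'd into the container at build time.
# The path below is hypothetical; anything your code needs must already be
# inside the image, since the container runs with no network access.
import torch

WEIGHTS_PATH = "/opt/algorithm/weights/model.pth"  # baked in via your Dockerfile


def load_model(model: torch.nn.Module) -> torch.nn.Module:
    state_dict = torch.load(WEIGHTS_PATH, map_location="cpu")
    model.load_state_dict(state_dict)
    return model.eval()
```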

Grand Challenge Submission Steps

Video tutorial version here: https://youtu.be/B6li_TY6dxo.

IMPORTANT: To submit algorithms to Grand Challenge, you must have a verified account.

If you haven't already, be sure to click the big green “Join” button for this challenge.

Below are step-by-step written instructions for submitting your algorithm to the Grand Challenge website, divided into the following sections:

  • 1. Creating A New Algorithm Entry
  • 2. Uploading Your Docker Container Image
  • 3. Optional: Viewing Algorithm Results On Example Data
  • 4. Submitting Your Algorithm
  • 5. Additional Details Only For The Development Phases

1. Creating A New Algorithm Entry

To begin a submission, navigate to the top of the web page and click “Submit”. Please note this button will not be visible until the submission start date.

This will take you to a menu where you select which phase to submit to. We highly recommend trying out your algorithm in a development phase before submitting to a final test phase, but this is not required. To create a new algorithm submission for a given phase, navigate to the bottom of the page. There will be hyperlinked blue text that says “on this page”, which will take you to the algorithm upload page for that phase.


You will be asked whether you want to create a new algorithm for that particular phase/task (red button: “Yes, I want to create an entirely new algorithm”). You will then be prompted to fill in some basic information, such as the title, description, and default GPU/memory requirements (these can be changed later). Also, keep in mind that you can only create 3 unique algorithms per task, but you can upload as many “overwrites” to existing containers as you’d like (e.g., for fixing bugs).

2. Uploading Your Docker Container Image

Once the algorithm entry is created, upload your Docker container image (ZIP file) to the algorithm page by clicking “Upload Container”. You will be asked whether your algorithm requires GPU support and how much memory it requires (up to 32 GB). Depending on the size of the ZIP file (and your internet connection), the upload can take from 15 minutes (a few GB) to over an hour (10 GB).

After uploading, Grand Challenge will perform a series of checks to ensure the container can be imported and used without issues. Note: this process can take several minutes; you can refresh the page to check the status. If your container image was uploaded without issues, it will say “Import Completed” and “Active” in green.

3. Optional: Viewing Algorithm Results On Example Data

Once your container is imported and active, you can “Try-out” your algorithm on some sample data. This is optional but strongly encouraged! You can use one of the training images (e.g., NIfTI training data files from Zenodo) to test whether predictions are as expected. IMPORTANT: Your algorithm must run in under 20 minutes per patient, or the runtime environment will automatically exit (hard runtime limit). For reference, the runtime environment specs for Grand Challenge can be found here.
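Given this hard limit, you may want to time your pipeline locally before submitting. A trivial sketch, where run_one_case() is a hypothetical stand-in for your end-to-end per-patient inference:

```python
# Rough local check against the 20-minute-per-patient limit.
import time

start = time.perf_counter()
run_one_case()  # hypothetical: your full load -> predict -> save pipeline for one patient
minutes = (time.perf_counter() - start) / 60
print(f"One case took {minutes:.1f} min (Grand Challenge hard limit: 20 min)")
```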

The Grand Challenge website has an online results viewer that allows you to visualize your generated segmentations. Navigate to “Results” and then click the eyeball icon (under the “Viewer” column) for a given prediction to see the output in the online graphical user interface.


4. Submitting Your Algorithm

If everything looks as you expect, you can navigate back to the “Submit” tab, select your algorithm, and submit it for that particular phase (click “Save”). Note: you can always navigate back to your algorithm by clicking “Algorithms” at the top of the website and looking for the algorithm name.

Your algorithm will be run on the hidden data for that particular phase, and the results will then be sent to our evaluation container for DSCagg score calculation. Development phase results should appear immediately on the leaderboard, while final test phase results will be hidden until the end of the submission period.
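Our evaluation container is the source of truth for scoring, but for intuition: aggregated Dice (DSCagg) pools intersections and volumes across all test cases before taking the Dice ratio, rather than averaging per-patient Dice scores. A NumPy sketch (the function name and inputs are illustrative):

```python
# Illustrative aggregated Dice (DSCagg) for one structure over a set of cases.
import numpy as np


def dsc_agg(preds: list[np.ndarray], gts: list[np.ndarray]) -> float:
    inter = sum(np.logical_and(p, g).sum() for p, g in zip(preds, gts))
    total = sum(p.sum() + g.sum() for p, g in zip(preds, gts))  # pooled volumes
    return 2.0 * inter / total
```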

5. Additional Details Only For The Development Phases

For the development phases only, we are granting participants algorithm editor job view permissions. This means you will be able to view the output logs generated when your algorithm runs in the development phases, which saves us from manually sharing the logs for each failed submission. It also effectively means you will have access to the input data for the development phases (please keep in mind this is considered private data and should not be used to re-train any of your models). Since the development phase data is not reused in the final test phase, accessing the development logs poses no data-leakage risk. Naturally, we will not give access to the logs for the final test phases.

To view the evaluation logs for a submission, navigate to the Results section for your algorithm and then click the Detail icon for a particular submission. You can then click “>_Logs” to see the stdout and stderr.

Please reach out to us if you run into any issues (email: hntsmrg2024@gmail.com)! We are here to help and happy to jump on a video call if needed.