ENTRY INTO THIS CHALLENGE CONSTITUTES YOUR ACCEPTANCE OF THESE
OFFICIAL RULES.
Methods¶
Only fully automatic methods are allowed. Methods should be submitted as specified on the submission page.¶
Inference should run on an AWS g4dn.2xlarge instance with a single GPU (16 GB of GPU memory), 8 CPU cores, and 32 GB of system RAM.
The maximum inference time to produce the sCT for a single case (one patient) is 15 minutes.
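These limits can be checked locally before submission. Below is a minimal sketch assuming a PyTorch model; `run_case`, `model`, and `volume` are hypothetical names for illustration, not part of the challenge submission API:

```python
import time
import torch

# Hypothetical self-check against the challenge limits: one 16 GB GPU
# (a g4dn.2xlarge hosts a single NVIDIA T4) and 15 minutes per case.
MAX_SECONDS_PER_CASE = 15 * 60
MAX_GPU_BYTES = 16 * 1024**3

def run_case(model, volume):
    """Run inference for one patient and verify time/GPU-memory budgets."""
    torch.cuda.reset_peak_memory_stats()
    start = time.perf_counter()
    with torch.no_grad():
        sct = model(volume.to("cuda"))
    torch.cuda.synchronize()  # wait for the GPU before reading the clock
    elapsed = time.perf_counter() - start
    peak = torch.cuda.max_memory_allocated()
    assert elapsed <= MAX_SECONDS_PER_CASE, f"case took {elapsed:.0f} s"
    assert peak <= MAX_GPU_BYTES, f"peak GPU memory {peak / 1e9:.1f} GB"
    return sct.cpu()
```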
One account per participant/team¶
Each participant/team may use only one account to participate in the competition. Participants who use multiple accounts will be disqualified from the competition. Each team may be composed of up to five participants, but only the top submitting authors will be invited to co-author the challenge paper summarizing the results.¶
Use of other training data/pre-trained models¶
The data used to train algorithms are restricted to the data provided by the challenge. Pre-trained networks may NOT be used in the challenge.¶
Code of the submitted algorithm¶
The participating teams are strongly encouraged to disclose or share their code, although this is not mandatory.¶
Award eligibility¶
As a condition for being ranked and considered as the challenge winner or eligible for any prize, the teams/participants must fulfil the following obligations:¶
- Present their method at the final event of the challenge at MICCAI 2023;
- Submit a paper reporting the details of the method, in short or long LNCS format (at the team's discretion);
- Sign and return all prize acceptance documents as may be required by Competition Sponsor/Organizers;
- Commit to citing the challenge paper and the data overview paper in any scientific or non-scientific publication presenting the developed method.
Awards¶
The results and winners will be announced publicly, and the top teams will be invited to present their approach during the final MICCAI event.
By submitting via the challenge website, participants are considered fully vested in the challenge, and their performance results may become part of presentations, publications, or subsequent analyses derived from the challenge at the organizers' discretion. Specifically, all performance results will be made public.
Depending on the available funding, the organizers may award prizes to the top teams in both tasks.
🏆 Prize¶
The best five submissions in each task will be awarded prizes distributed in cash, for a total of €10.000,-, as follows:
Award Task 1: MRI-to-CT
1. €2.200,-
2. €1.250,-
3. €850,-
4. €500,-
5. €200,-
Award Task 2: CBCT-to-CT
1. €2.200,-
2. €1.250,-
3. €850,-
4. €500,-
5. €200,-
Participation policy for organizers' institutes¶
Members of the organizers' institutes may participate in the challenge if they are not listed among the organizers, contributors, or data providers and have not co-authored any publication with the organizers in the last year; otherwise, they are not eligible for the prizes. Organizers, contributors, and data providers may not participate in the challenge.
No private sharing outside teams¶
Privately sharing code or data outside of teams is not permitted.
Competition Timeline [18 weeks]¶
- Begin challenge: Release training cases 1/04/2023
- Training phase (8 weeks) 1/04/2023 - 31/05/2023
- Preliminary phase (15 weeks) 1/05/2023 - 22/08/2023
- Presentation of the challenge at ESTRO23 13/05/2023
- Validation phase (6 weeks) 1/06/2023 - 15/07/2023
- Test phase (5 weeks) 16/07/2023 - 22/08/2023
- Deadline for method paper registration 22/08/2023
- Announcements and invitation to present 20/09/2023
- Presentation of the challenge results (1/2) MICCAI, Vancouver, 8/10/2023
- Presentation of the challenge results (2/2) ESTRO April/May 2024
Data¶
All images are released under a CC-BY-NC license in compressed NIfTI format (*.nii.gz).
Training inputs (MRI for task 1, CBCT for task 2, and a mask) and ground truth (CT for both tasks) will be made available on Zenodo. Validation data will be made available after the presentation of the challenge results, and test data will be released only once the challenge is closed (expected ~2028, though the date is subject to change).
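For reference, compressed NIfTI volumes can be read with the nibabel library. The sketch below assumes per-case file names mr.nii.gz, ct.nii.gz, and mask.nii.gz; these names are an assumption for illustration, so consult the data documentation on Zenodo for the actual convention:

```python
import nibabel as nib  # pip install nibabel

def load_case(case_dir):
    """Load one Task 1 training case (MRI, CT, mask) from *.nii.gz files."""
    mr = nib.load(f"{case_dir}/mr.nii.gz")
    ct = nib.load(f"{case_dir}/ct.nii.gz")
    mask = nib.load(f"{case_dir}/mask.nii.gz")
    # get_fdata() returns the voxel grid as a float64 numpy array;
    # the affine maps voxel indices to scanner/world coordinates.
    return (mr.get_fdata(), ct.get_fdata(),
            mask.get_fdata().astype(bool), mr.affine)
```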
Follow-up publication¶
The SynthRAD2023 organizers will consolidate the results and submit a challenge paper (to IEEE TMI, MEDIA, an LNCS issue, or a similar venue).
Each team ranked in the top ten will be invited to participate in this publication, with the requirement that they submit an algorithm summary in LNCS proceedings format. The organizers reserve the right to reduce the number of co-authors from the top-performing teams.
The organizers will analyze the submitted sCTs, which the challenge submission system will have collected automatically.
Publishing the submitted method elsewhere¶
The organizers, contributors, and data providers may independently publish methods based on the challenge data after an embargo of 6 months from the challenge's final event, counted up to the submission date of the work. Participants may likewise submit their results elsewhere after the same 6-month embargo; however, if they cite the overview paper, no embargo applies.
Other rules¶
The remaining rules are provided along with the challenge design, which can be found at https://doi.org/10.5281/zenodo.7746019.