Results (July 2019)

TADPOLE's first phase is complete and we have evaluated all the prize-eligible submissions.

Watch the live announcement on YouTube: https://www.youtube.com/watch?v=BFS9Sr0lhuM

Check below for a description of the evaluation dataset and the overall rankings.

General observations

  • There was no clear "one-size-fits-all" winner.
  • Data-driven approaches for both feature selection and prediction of target variables generally performed well.
  • Many teams combined different types of algorithms to produce forecasts:
    1. Most used statistical regression;
    2. Some used generic machine learning techniques that are robust and can work well for other problems; and
    3. Some used disease progression models that are specifically tailored for the current problem of disease prediction.
  • Forecasts were very good for clinical diagnosis and ventricle volume. Predicting ADAS, on the other hand, turned out to be very difficult: no team was able to generate forecasts that were significantly better than random guessing.
  • Meta-analysis results: the most important features for improving predictions were DTI and CSF for clinical diagnosis, and "augmented features" for ventricle volume prediction.
  • Throughout this page we will refer to ADAS-Cog 13 as simply ADAS.

TADPOLE Prize winners


Category | Team | Members | Institution | Country | Prize
Overall best | Frog | Keli Liu, Paul Manser, Christina Rabe | Genentech | USA | £5000
Clinical status | Frog | Keli Liu, Paul Manser, Christina Rabe | Genentech | USA | £5000
Ventricle volume | EMC1 | Vikram Venkatraghavan, Esther Bron, Stefan Klein | Erasmus MC | Netherlands | £5000
Best university team | Apocalypse | Manon Ansart | ICM, INRIA | France | £5000
High school (best) | Chen-MCW | Gang Chen | Medical College of Wisconsin | USA | £5000
High school (runner-up) | CyberBrains | Ionut Buciuman, Alex Kelner, Raluca Pop, Denisa Rimocea, Kruk Zsolt | Vasile Lucaciu College | Romania | £2500
Overall best D3 prediction | GlassFrog | Steven Hill, Brian Tom, Anais Rouanet, Zhiyue Huang, James Howlett, Steven Kiddle, Simon R. White, Sach Mukherjee, Bernd Taschler | Cambridge University | UK | £2500

Overall Results

Legend:

  • MAUC – Multiclass Area Under the Curve
  • BCA – Balanced Classification Accuracy
  • MAE – Mean Absolute Error
  • WES – Weighted Error Score
  • CPA – Coverage Probability Accuracy for 50% Confidence Interval
  • ADAS – Alzheimer's Disease Assessment Scale – Cognitive (ADAS-Cog 13)
  • VENTS – Ventricle Volume
  • RANK (overall) – We first compute the sum of ranks from MAUC, ADAS MAE and VENTS MAE, then derive the final ranking from these sums of ranks. For example, the top entry has the lowest sum of ranks from these three categories (a short sketch of this computation is given after the note below).

Note (14 June 2019): The rankings in each prize category can be obtained by ordering the tables according to Diagnosis MAUC, ADAS MAE or Ventricle MAE, respectively. The overall rankings below require valid submissions for every target variable.
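
To make the RANK column concrete, below is a minimal pandas sketch of the sum-of-ranks computation. The column names and the three example rows are illustrative assumptions, not the released leaderboard files.

```python
import pandas as pd

# Illustrative leaderboard with one row per submission; column names are
# assumptions for this sketch, not the exact headers of the released files.
leaderboard = pd.DataFrame({
    "submission": ["Frog", "EMC1-Std", "VikingAI-Sigmoid"],
    "diag_mauc":  [0.931, 0.898, 0.875],   # higher is better
    "adas_mae":   [4.85, 6.05, 5.20],      # lower is better
    "vents_mae":  [0.45, 0.41, 0.45],      # lower is better
})

# Rank each target variable separately (ties receive the average rank,
# which is why ranks such as 11.5 appear in the tables below).
mauc_rank = leaderboard["diag_mauc"].rank(ascending=False, method="average")
adas_rank = leaderboard["adas_mae"].rank(ascending=True, method="average")
vent_rank = leaderboard["vents_mae"].rank(ascending=True, method="average")

# The overall RANK is the rank of the sum of the three per-target ranks.
leaderboard["sum_of_ranks"] = mauc_rank + adas_rank + vent_rank
leaderboard["overall_rank"] = leaderboard["sum_of_ranks"].rank(method="average")
print(leaderboard.sort_values("overall_rank"))
```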

Overall scores — longitudinal dataset D2

RANK | FILE NAME | MAUC RANK | MAUC | BCA | ADAS RANK | ADAS MAE | ADAS WES | ADAS CPA | VENTS RANK | VENTS MAE | VENTS WES | VENTS CPA
1.0 | Frog | 1.0 | 0.931 | 0.849 | 4.0 | 4.85 | 4.74 | 0.44 | 10.0 | 0.45 | 0.33 | 0.47
2.0 | EMC1-Std | 8.0 | 0.898 | 0.811 | 23.5 | 6.05 | 5.40 | 0.45 | 1.5 | 0.41 | 0.29 | 0.43
3.0 | VikingAI-Sigmoid | 16.0 | 0.875 | 0.760 | 7.0 | 5.20 | 5.11 | 0.02 | 11.5 | 0.45 | 0.35 | 0.20
4.0 | EMC1-Custom | 11.0 | 0.892 | 0.798 | 23.5 | 6.05 | 5.40 | 0.45 | 1.5 | 0.41 | 0.29 | 0.43
5.0 | CBIL | 9.0 | 0.897 | 0.803 | 15.0 | 5.66 | 5.65 | 0.37 | 13.0 | 0.46 | 0.46 | 0.09
6.0 | Apocalypse | 7.0 | 0.902 | 0.827 | 14.0 | 5.57 | 5.57 | 0.50 | 20.0 | 0.52 | 0.52 | 0.50
7.0 | GlassFrog-Average | 5.0 | 0.902 | 0.825 | 8.0 | 5.26 | 5.27 | 0.26 | 29.0 | 0.68 | 0.60 | 0.33
8.0 | GlassFrog-SM | 5.0 | 0.902 | 0.825 | 17.0 | 5.77 | 5.92 | 0.20 | 21.0 | 0.52 | 0.33 | 0.20
9.0 | BORREGOTECMTY | 19.0 | 0.866 | 0.808 | 20.0 | 5.90 | 5.82 | 0.39 | 5.0 | 0.43 | 0.37 | 0.40
10.0 | EMC-EB | 3.0 | 0.907 | 0.805 | 39.0 | 6.75 | 6.66 | 0.50 | 9.0 | 0.45 | 0.40 | 0.48
11.5 | lmaUCL-Covariates | 22.0 | 0.852 | 0.760 | 27.0 | 6.28 | 6.29 | 0.28 | 3.0 | 0.42 | 0.41 | 0.11
11.5 | CN2L-Average | 27.0 | 0.843 | 0.792 | 9.0 | 5.31 | 5.31 | 0.35 | 16.0 | 0.49 | 0.49 | 0.33
13.0 | VikingAI-Logistic | 20.0 | 0.865 | 0.754 | 21.0 | 6.02 | 5.91 | 0.26 | 11.5 | 0.45 | 0.35 | 0.20
14.0 | lmaUCL-Std | 21.0 | 0.859 | 0.781 | 28.0 | 6.30 | 6.33 | 0.26 | 4.0 | 0.42 | 0.41 | 0.09
15.5 | CN2L-RandomForest | 10.0 | 0.896 | 0.792 | 16.0 | 5.73 | 5.73 | 0.42 | 31.0 | 0.71 | 0.71 | 0.41
15.5 | FortuneTellerFish-SuStaIn | 40.0 | 0.806 | 0.685 | 3.0 | 4.81 | 4.81 | 0.21 | 14.0 | 0.49 | 0.49 | 0.18
17.0 | CN2L-NeuralNetwork | 41.0 | 0.783 | 0.717 | 10.0 | 5.36 | 5.36 | 0.34 | 7.0 | 0.44 | 0.44 | 0.27
18.0 | BenchmarkMixedEffectsAPOE | 35.0 | 0.822 | 0.749 | 2.0 | 4.75 | 4.75 | 0.36 | 23.0 | 0.57 | 0.57 | 0.40
19.0 | Tohka-Ciszek-RandomForestLin | 17.0 | 0.875 | 0.796 | 22.0 | 6.03 | 6.03 | 0.15 | 22.0 | 0.56 | 0.56 | 0.37
20.0 | BGU-LSTM | 12.0 | 0.883 | 0.779 | 25.0 | 6.09 | 6.12 | 0.39 | 25.0 | 0.60 | 0.60 | 0.23
21.0 | DIKU-GeneralisedLog-Custom | 13.0 | 0.878 | 0.790 | 11.5 | 5.40 | 5.40 | 0.26 | 38.5 | 1.05 | 1.05 | 0.05
22.0 | DIKU-GeneralisedLog-Std | 14.0 | 0.877 | 0.790 | 11.5 | 5.40 | 5.40 | 0.26 | 38.5 | 1.05 | 1.05 | 0.05
23.0 | CyberBrains | 34.0 | 0.823 | 0.747 | 6.0 | 5.16 | 5.16 | 0.24 | 26.0 | 0.62 | 0.62 | 0.12
24.0 | AlgosForGood | 24.0 | 0.847 | 0.810 | 13.0 | 5.46 | 5.11 | 0.13 | 30.0 | 0.69 | 3.31 | 0.19
25.0 | lmaUCL-halfD1 | 26.0 | 0.845 | 0.753 | 38.0 | 6.53 | 6.51 | 0.31 | 6.0 | 0.44 | 0.42 | 0.13
26.0 | BGU-RF | 28.0 | 0.838 | 0.673 | 29.5 | 6.33 | 6.10 | 0.35 | 17.5 | 0.50 | 0.38 | 0.26
27.0 | Mayo-BAI-ASU | 52.0 | 0.691 | 0.624 | 5.0 | 4.98 | 4.98 | 0.32 | 19.0 | 0.52 | 0.52 | 0.40
28.0 | BGU-RFFIX | 32.0 | 0.831 | 0.673 | 29.5 | 6.33 | 6.10 | 0.35 | 17.5 | 0.50 | 0.38 | 0.26
29.0 | FortuneTellerFish-Control | 31.0 | 0.834 | 0.692 | 1.0 | 4.70 | 4.70 | 0.22 | 50.0 | 1.38 | 1.38 | 0.50
30.0 | GlassFrog-LCMEM-HDR | 5.0 | 0.902 | 0.825 | 31.0 | 6.34 | 6.21 | 0.47 | 51.0 | 1.66 | 1.59 | 0.41
31.0 | SBIA | 43.0 | 0.776 | 0.721 | 43.0 | 7.10 | 7.38 | 0.40 | 8.0 | 0.44 | 0.31 | 0.13
32.0 | Chen-MCW-Stratify | 23.0 | 0.848 | 0.783 | 36.5 | 6.48 | 6.24 | 0.23 | 36.5 | 1.01 | 1.00 | 0.11
33.0 | Rocket | 54.0 | 0.680 | 0.519 | 18.0 | 5.81 | 5.71 | 0.34 | 28.0 | 0.64 | 0.64 | 0.29
34.5 | Chen-MCW-Std | 29.0 | 0.836 | 0.778 | 36.5 | 6.48 | 6.24 | 0.23 | 36.5 | 1.01 | 1.00 | 0.11
34.5 | BenchmarkSVM | 30.0 | 0.836 | 0.764 | 40.0 | 6.82 | 6.82 | 0.42 | 32.0 | 0.86 | 0.84 | 0.50
36.0 | DIKU-ModifiedMri-Custom | 36.5 | 0.807 | 0.670 | 33.5 | 6.44 | 6.44 | 0.27 | 34.5 | 0.92 | 0.92 | 0.01
37.0 | DIKU-ModifiedMri-Std | 38.5 | 0.806 | 0.670 | 33.5 | 6.44 | 6.44 | 0.27 | 34.5 | 0.92 | 0.92 | 0.01
38.0 | DIVE | 51.0 | 0.708 | 0.568 | 42.0 | 7.10 | 7.10 | 0.34 | 15.0 | 0.49 | 0.49 | 0.13
39.0 | ITESMCEM | 53.0 | 0.680 | 0.657 | 26.0 | 6.26 | 6.26 | 0.35 | 33.0 | 0.92 | 0.92 | 0.43
40.0 | BenchmarkLastVisit | 44.5 | 0.774 | 0.792 | 41.0 | 7.05 | 7.05 | 0.45 | 27.0 | 0.63 | 0.61 | 0.47
41.0 | Sunshine-Conservative | 25.0 | 0.845 | 0.816 | 44.5 | 7.90 | 7.90 | 0.50 | 43.5 | 1.12 | 1.12 | 0.50
42.0 | BravoLab | 46.0 | 0.771 | 0.682 | 47.0 | 8.22 | 8.22 | 0.49 | 24.0 | 0.58 | 0.58 | 0.41
43.0 | DIKU-ModifiedLog-Custom | 36.5 | 0.807 | 0.670 | 33.5 | 6.44 | 6.44 | 0.27 | 47.5 | 1.17 | 1.17 | 0.06
44.0 | DIKU-ModifiedLog-Std | 38.5 | 0.806 | 0.670 | 33.5 | 6.44 | 6.44 | 0.27 | 47.5 | 1.17 | 1.17 | 0.06
45.0 | Sunshine-Std | 33.0 | 0.825 | 0.771 | 44.5 | 7.90 | 7.90 | 0.50 | 43.5 | 1.12 | 1.12 | 0.50
46.0 | Billabong-UniAV45 | 49.0 | 0.720 | 0.616 | 48.5 | 9.22 | 8.82 | 0.29 | 41.5 | 1.09 | 0.99 | 0.45
47.0 | Billabong-Uni | 50.0 | 0.718 | 0.622 | 48.5 | 9.22 | 8.82 | 0.29 | 41.5 | 1.09 | 0.99 | 0.45
48.0 | ATRI-Biostat-JMM | 42.0 | 0.779 | 0.710 | 51.0 | 12.88 | 69.62 | 0.35 | 54.0 | 1.95 | 5.12 | 0.33
49.0 | Billabong-Multi | 56.0 | 0.541 | 0.556 | 55.0 | 27.01 | 19.90 | 0.46 | 40.0 | 1.07 | 1.07 | 0.45
50.0 | ATRI-Biostat-MA | 47.0 | 0.741 | 0.671 | 52.0 | 12.88 | 11.32 | 0.19 | 53.0 | 1.84 | 5.27 | 0.23
51.0 | BIGS2 | 58.0 | 0.455 | 0.488 | 50.0 | 11.62 | 14.65 | 0.50 | 49.0 | 1.20 | 1.12 | 0.07
52.0 | Billabong-MultiAV45 | 57.0 | 0.527 | 0.530 | 56.0 | 28.45 | 21.22 | 0.47 | 45.0 | 1.13 | 1.07 | 0.47
53.0 | ATRI-Biostat-LTJMM | 55.0 | 0.636 | 0.563 | 54.0 | 16.07 | 74.65 | 0.33 | 52.0 | 1.80 | 5.01 | 0.26
- | Threedays | 2.0 | 0.921 | 0.823 | - | - | - | - | - | - | - | -
- | ARAMIS-Pascal | 15.0 | 0.876 | 0.850 | - | - | - | - | - | - | - | -
- | IBM-OZ-Res | 18.0 | 0.868 | 0.766 | - | - | - | - | 46.0 | 1.15 | 1.15 | 0.50
- | Orange | 44.5 | 0.774 | 0.792 | - | - | - | - | - | - | - | -
- | SMALLHEADS-NeuralNet | 48.0 | 0.737 | 0.605 | 53.0 | 13.87 | 13.87 | 0.41 | - | - | - | -
- | SMALLHEADS-LinMixedEffects | - | - | - | 46.0 | 8.09 | 7.94 | 0.04 | - | - | - | -
- | Tohka-Ciszek-SMNSR | - | - | - | 19.0 | 5.87 | 5.87 | 0.14 | - | - | - | -


The results on the D2 dataset suggest that we do not have a clear winner in all categories. While Frog had the best overall submission, with the lowest sum of ranks, each performance metric taken individually had a different winner: Frog (clinical diagnosis MAUC of 0.931), ARAMIS-Pascal (clinical diagnosis BCA of 0.850), FortuneTellerFish-Control (ADAS MAE and WES of 4.70), VikingAI-Sigmoid (ADAS CPA of 0.02), EMC1-Std/EMC1-Custom (Ventricle MAE of 0.41 and Ventricle WES of 0.29), and DIKU-ModifiedMri-Std/DIKU-ModifiedMri-Custom (Ventricle CPA of 0.01).

Overall scores — cross-sectional dataset D3

RANK | FILE NAME | MAUC RANK | MAUC | BCA | ADAS RANK | ADAS MAE | ADAS WES | ADAS CPA | VENTS RANK | VENTS MAE | VENTS WES | VENTS CPA
1.0 | GlassFrog-Average | 3.0 | 0.897 | 0.826 | 5.0 | 5.86 | 5.57 | 0.25 | 3.0 | 0.68 | 0.55 | 0.24
2.0 | GlassFrog-LCMEM-HDR | 3.0 | 0.897 | 0.826 | 9.0 | 6.57 | 6.56 | 0.34 | 1.0 | 0.48 | 0.38 | 0.24
3.0 | GlassFrog-SM | 3.0 | 0.897 | 0.826 | 4.0 | 5.77 | 5.77 | 0.19 | 9.0 | 0.82 | 0.55 | 0.07
4.0 | Tohka-Ciszek-RandomForestLin | 11.0 | 0.865 | 0.786 | 2.0 | 4.92 | 4.92 | 0.10 | 10.0 | 0.83 | 0.83 | 0.35
7.0 | VikingAI-Logistic | 8.0 | 0.876 | 0.768 | 6.0 | 5.94 | 5.91 | 0.22 | 22.0 | 1.04 | 1.01 | 0.18
7.0 | Rocket | 10.0 | 0.865 | 0.771 | 3.0 | 5.27 | 5.14 | 0.39 | 23.0 | 1.06 | 1.06 | 0.27
7.0 | lmaUCL-Std | 13.0 | 0.854 | 0.698 | 17.0 | 6.95 | 6.93 | 0.05 | 6.0 | 0.81 | 0.81 | 0.22
7.0 | lmaUCL-Covariates | 13.0 | 0.854 | 0.698 | 17.0 | 6.95 | 6.93 | 0.05 | 6.0 | 0.81 | 0.81 | 0.22
7.0 | lmaUCL-halfD1 | 13.0 | 0.854 | 0.698 | 17.0 | 6.95 | 6.93 | 0.05 | 6.0 | 0.81 | 0.81 | 0.22
10.0 | EMC1-Std | 30.0 | 0.705 | 0.567 | 7.0 | 6.29 | 6.19 | 0.47 | 4.0 | 0.80 | 0.62 | 0.48
11.0 | SBIA | 28.0 | 0.779 | 0.782 | 10.0 | 6.63 | 6.43 | 0.40 | 8.0 | 0.82 | 0.75 | 0.18
13.0 | BGU-LSTM | 6.0 | 0.877 | 0.776 | 14.0 | 6.75 | 6.17 | 0.39 | 27.0 | 1.11 | 0.79 | 0.17
13.0 | BGU-RFFIX | 6.0 | 0.877 | 0.776 | 14.0 | 6.75 | 6.17 | 0.39 | 27.0 | 1.11 | 0.79 | 0.17
13.0 | BGU-RF | 6.0 | 0.877 | 0.776 | 14.0 | 6.75 | 6.17 | 0.39 | 27.0 | 1.11 | 0.79 | 0.17
15.0 | BravoLab | 18.0 | 0.813 | 0.730 | 28.0 | 8.02 | 8.02 | 0.47 | 2.0 | 0.64 | 0.64 | 0.42
16.5 | BORREGOTECMTY | 15.0 | 0.852 | 0.748 | 8.0 | 6.44 | 5.86 | 0.46 | 30.0 | 1.14 | 1.02 | 0.49
16.5 | CyberBrains | 17.0 | 0.830 | 0.755 | 1.0 | 4.72 | 4.72 | 0.21 | 35.0 | 1.54 | 1.54 | 0.50
18.0 | ATRI-Biostat-MA | 19.0 | 0.799 | 0.772 | 26.0 | 7.39 | 6.63 | 0.04 | 11.0 | 0.93 | 0.97 | 0.10
19.5 | EMC-EB | 9.0 | 0.869 | 0.765 | 27.0 | 7.71 | 7.91 | 0.50 | 21.0 | 1.03 | 1.07 | 0.49
19.5 | DIKU-GeneralisedLog-Std | 20.0 | 0.798 | 0.684 | 20.5 | 6.99 | 6.99 | 0.17 | 16.5 | 0.95 | 0.95 | 0.05
21.0 | DIKU-GeneralisedLog-Custom | 21.0 | 0.798 | 0.681 | 20.5 | 6.99 | 6.99 | 0.17 | 16.5 | 0.95 | 0.95 | 0.05
22.5 | DIKU-ModifiedLog-Std | 22.5 | 0.798 | 0.688 | 23.5 | 7.10 | 7.10 | 0.17 | 13.5 | 0.95 | 0.95 | 0.05
22.5 | DIKU-ModifiedMri-Std | 22.5 | 0.798 | 0.688 | 23.5 | 7.10 | 7.10 | 0.17 | 13.5 | 0.95 | 0.95 | 0.05
24.5 | DIKU-ModifiedLog-Custom | 24.5 | 0.798 | 0.691 | 23.5 | 7.10 | 7.10 | 0.17 | 13.5 | 0.95 | 0.95 | 0.05
24.5 | DIKU-ModifiedMri-Custom | 24.5 | 0.798 | 0.691 | 23.5 | 7.10 | 7.10 | 0.17 | 13.5 | 0.95 | 0.95 | 0.05
26.0 | Billabong-Uni | 31.0 | 0.704 | 0.626 | 11.5 | 6.69 | 6.69 | 0.38 | 19.5 | 0.98 | 0.98 | 0.48
27.0 | Billabong-UniAV45 | 32.0 | 0.703 | 0.620 | 11.5 | 6.69 | 6.69 | 0.38 | 19.5 | 0.98 | 0.98 | 0.48
28.0 | ATRI-Biostat-JMM | 26.0 | 0.794 | 0.781 | 29.0 | 8.45 | 8.12 | 0.34 | 18.0 | 0.97 | 1.45 | 0.37
29.0 | CBIL | 16.0 | 0.847 | 0.780 | 33.0 | 10.99 | 11.65 | 0.49 | 29.0 | 1.12 | 1.12 | 0.39
30.0 | BenchmarkLastVisit | 27.0 | 0.785 | 0.771 | 19.0 | 6.97 | 7.07 | 0.42 | 33.0 | 1.17 | 0.64 | 0.11
31.0 | Billabong-MultiAV45 | 33.0 | 0.682 | 0.603 | 30.5 | 9.30 | 9.30 | 0.43 | 24.5 | 1.09 | 1.09 | 0.49
32.0 | Billabong-Multi | 34.0 | 0.681 | 0.605 | 30.5 | 9.30 | 9.30 | 0.43 | 24.5 | 1.09 | 1.09 | 0.49
33.0 | ATRI-Biostat-LTJMM | 29.0 | 0.732 | 0.675 | 34.0 | 12.74 | 63.98 | 0.37 | 32.0 | 1.17 | 1.07 | 0.40
34.0 | BenchmarkSVM | 36.0 | 0.494 | 0.490 | 32.0 | 10.01 | 10.01 | 0.42 | 31.0 | 1.15 | 1.18 | 0.50
35.0 | DIVE | 35.0 | 0.512 | 0.498 | 35.0 | 16.66 | 16.74 | 0.41 | 34.0 | 1.42 | 1.42 | 0.34
- | IBM-OZ-Res | 1.0 | 0.905 | 0.830 | - | - | - | - | 36.0 | 1.77 | 1.77 | 0.50

Here, most submissions performed worse than their equivalent predictions on the longitudinal D2 dataset, due to the lack of longitudinal, multimodal data. GlassFrog-Average had the best overall rank, with a diagnosis MAUC of 0.897, an ADAS MAE of 5.86 and a Ventricle MAE of 0.68 (% ICV). For diagnosis prediction, IBM-OZ-Res obtained the highest scores: an MAUC of 0.905 and a BCA of 0.830. For ADAS prediction, CyberBrains had the best MAE and WES of 4.72, while ATRI-Biostat-MA obtained the best ADAS CPA of 0.04. For Ventricle prediction, GlassFrog-LCMEM-HDR had the best MAE of 0.48 (% ICV) and the best WES of 0.38, while the six DIKU submissions obtained the best CPA of 0.05.

Additional entries

In addition to the standard predictions and the benchmarks, we also included two consensus predictions, obtained by taking the mean (ConsensusMean) and the median (ConsensusMedian) over all predictions from all participants. For D2 predictions, the ConsensusMedian submission obtained the best overall rank, with a diagnosis MAUC of 0.925 (second best), an ADAS-Cog 13 MAE of 5.12 (ninth best) and a Ventricles MAE of 0.38, the best result in this category for D2. ConsensusMean, on the other hand, ranked 3rd overall on D2, with a diagnosis MAUC of 0.920 (fourth best), an ADAS-Cog 13 MAE of 3.75, the best prediction in this category, and a Ventricle MAE of 0.48 (rank 16). For ADAS-Cog 13 and Ventricle volume prediction, the best consensus methods reduced the error by 11% and 8% respectively compared to the best prediction from participants or benchmarks.
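
As a rough illustration of how such consensus forecasts can be assembled, the sketch below takes the per-subject, per-month mean or median across all participants' forecasts. The column names and the long-format layout are assumptions for illustration, not the exact TADPOLE submission format.

```python
import pandas as pd

def build_consensus(forecasts: pd.DataFrame, how: str = "median") -> pd.DataFrame:
    """Aggregate all participants' forecasts into one consensus forecast.

    `forecasts` is assumed to hold one row per (submission, subject,
    forecast month) with the predicted values in the columns below.
    """
    value_cols = ["prob_CN", "prob_MCI", "prob_AD", "adas13", "ventricles_icv"]
    grouped = forecasts.groupby(["subject_id", "forecast_month"])[value_cols]
    return grouped.median() if how == "median" else grouped.mean()

# Hypothetical usage, given a concatenated table of all submissions:
# consensus_median = build_consensus(all_forecasts, how="median")  # ConsensusMedian
# consensus_mean   = build_consensus(all_forecasts, how="mean")    # ConsensusMean
```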

To test whether the best results could have been obtained by chance, due to randomness in the test set, we evaluated n=62 randomly perturbed predictions from the simplest benchmark, BenchmarkLastVisit (as many perturbations as there were entries), and recorded the best result obtained by any of these predictions. These are shown as RandomisedBest, and they achieve high scores especially for ADAS-Cog 13, ranking 3rd with a final MAE of 4.52. High performance scores are also obtained for Ventricles, ranking 14th with an MAE of 0.47, a 14% increase in error over the best forecast, while for diagnosis prediction a lower MAUC of 0.797 is obtained, ranking 43rd. This suggests that entries with a higher MAE than RandomisedBest should be interpreted with care, as their scores and ranks could be high simply due to randomness in the test set. This is particularly relevant for ADAS-Cog 13 predictions, where only BenchmarkMixedEffects and ConsensusMean obtained better results, suggesting that no other method predicts ADAS-Cog 13 better than random guessing based on the last available measurement.
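
The sketch below illustrates the general idea behind RandomisedBest under simple assumptions: the benchmark forecast is perturbed with random noise many times, and the best resulting score is kept. The Gaussian noise model, its scale and the function names are assumptions for illustration, not the exact perturbation scheme used in the evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomised_best(benchmark_forecast, true_values, score_fn,
                    n_perturbations=62, noise_scale=1.0):
    """Best score over randomly perturbed copies of a benchmark forecast.

    The point is only that the best of many random perturbations sets a
    floor that genuine methods should beat; `score_fn` is assumed to be a
    lower-is-better error such as mean absolute error.
    """
    best = None
    for _ in range(n_perturbations):
        # Add illustrative Gaussian noise to the benchmark's point forecasts.
        perturbed = benchmark_forecast + rng.normal(
            0.0, noise_scale, size=benchmark_forecast.shape)
        score = score_fn(true_values, perturbed)
        best = score if best is None else min(best, score)
    return best
```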

It is worth mentioning that, while drafting the manuscript, we discovered that dropping APOE as a covariate from the BenchmarkMixedEffectsAPOE model considerably decreases the error in ADAS prediction, so we included the resulting APOE-free model (BenchmarkMixedEffects) as an additional entry for scientific interest.


Additional entries for D2

RANK | FILE NAME | MAUC RANK | MAUC | BCA | ADAS RANK | ADAS MAE | ADAS WES | ADAS CPA | VENTS RANK | VENTS MAE | VENTS WES | VENTS CPA
1.5 | ConsensusMedian | 1.0 | 0.925 | 0.857 | 4.0 | 5.12 | 5.01 | 0.28 | 1.0 | 0.38 | 0.33 | 0.09
1.5 | ConsensusMean | 2.0 | 0.920 | 0.835 | 1.0 | 3.75 | 3.54 | 0.00 | 3.0 | 0.48 | 0.45 | 0.13
3.5 | BenchmarkMixedEffects | 3.0 | 0.846 | 0.706 | 2.0 | 4.19 | 4.19 | 0.31 | 4.0 | 0.56 | 0.56 | 0.50
3.5 | RandomisedBest | 4.0 | 0.797 | 0.803 | 3.0 | 4.52 | 4.52 | 0.27 | 2.0 | 0.47 | 0.45 | 0.33

Additional entries for D3

RANK | FILE NAME | MAUC RANK | MAUC | BCA | ADAS RANK | ADAS MAE | ADAS WES | ADAS CPA | VENTS RANK | VENTS MAE | VENTS WES | VENTS CPA
1.0 | ConsensusMean | 1.0 | 0.917 | 0.821 | 2.0 | 4.58 | 4.34 | 0.12 | 2.0 | 0.73 | 0.72 | 0.09
2.0 | ConsensusMedian | 2.0 | 0.905 | 0.817 | 3.0 | 5.44 | 5.37 | 0.19 | 1.0 | 0.71 | 0.65 | 0.10
3.0 | BenchmarkMixedEffects | 3.0 | 0.839 | 0.728 | 1.0 | 4.23 | 4.23 | 0.34 | 3.0 | 1.13 | 1.13 | 0.50

Confidence Intervals

Below are confidence intervals (CIs) computed for every submission, based on 50 bootstraps of the test set D4. The first figure (Fig. 1) shows CIs based on forecasts from D2, while the second (Fig. 2) shows CIs for forecasts on D3.
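
For readers who want to reproduce this kind of interval, the sketch below shows a generic percentile bootstrap over test subjects. The resampling interface, the default confidence level and the function names are assumptions for illustration, not the exact evaluation code.

```python
import numpy as np

def bootstrap_ci(metric_fn, subject_ids, n_boot=50, alpha=0.05, seed=0):
    """Percentile bootstrap CI for a performance metric over test subjects.

    `metric_fn(ids)` is assumed to recompute the metric (e.g. MAUC or MAE)
    on the given resampled list of subject IDs.
    """
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_boot):
        # Resample test subjects with replacement and rescore the forecast.
        resampled = rng.choice(subject_ids, size=len(subject_ids), replace=True)
        stats.append(metric_fn(resampled))
    lower, upper = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lower, upper
```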

Fig 1. Confidence intervals for forecasts based on the longitudinal D2 prediction set.

Fig 2. Confidence intervals for forecasts based on the cross-sectional D3 prediction set.

Meta-analysis

To understand which types of features and algorithms yielded higher performance, we show here associations between predictive performance and feature-selection methods, types of features, methods for data imputation, and methods for forecasting the target variables (clinical diagnosis, ADAS and Ventricles). For each type of feature/method and each target variable, we show the distribution of estimated coefficients from a general linear model, derived from the approximate inverse Hessian matrix at the maximum likelihood estimate. From this analysis we removed outliers, defined as submissions with ADAS MAE higher than 10 and Ventricle MAE higher than 0.15 (% ICV). For all plots, distributions to the right of the grey dashed vertical line are associated with better performance.
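
A minimal sketch of this kind of analysis is shown below, assuming a binary design matrix of method/feature indicators and one performance score per submission; it uses statsmodels as a stand-in for our actual fitting code, and the sampling of coefficients from a Gaussian centred at the fitted values is an illustrative way to visualise their distributions.

```python
import numpy as np
import statsmodels.api as sm

def coefficient_distributions(X, y, n_samples=10_000, seed=0):
    """Sample GLM coefficients from a Gaussian centred at the MLE.

    X: binary indicator matrix (one column per feature type / method category).
    y: one performance score per submission (e.g. diagnosis MAUC, or minus
       the MAE so that larger values always mean better performance).
    Both inputs, and the Gaussian family, are assumptions of this sketch.
    """
    model = sm.GLM(y, sm.add_constant(X), family=sm.families.Gaussian())
    fit = model.fit()
    rng = np.random.default_rng(seed)
    # cov_params() is the estimated covariance of the coefficients, i.e. the
    # (approximate) inverse Hessian of the log-likelihood at the MLE.
    return rng.multivariate_normal(np.asarray(fit.params),
                                   np.asarray(fit.cov_params()),
                                   size=n_samples)
```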

The results in Fig. 3 below show trends indicating which aspects of the methods could be associated with better performance. For feature selection, manual selection of features is associated with better predictive performance for ADAS and Ventricles. In terms of feature types, including features from many modalities was generally associated with an increase in overall performance, with the exception of FDG (for all target variables). Moreover, augmented features correlate with improved overall performance, especially for ventricle prediction. For data imputation methods, some differences can be observed, but no clear conclusions can be drawn at this point. For prediction models, we notice that neural networks are significantly associated with increased performance in ventricle prediction, while disease progression models are associated with decreased performance in the prediction of clinical diagnosis and ventricles. However, given the small number of methods tested (<50) and the large number of degrees of freedom (n=21), these results should be interpreted with care.


Fig 3. Associations between the prediction of clinical diagnosis, ADAS and Ventricle volume and different strategies of (top) feature selection, (upper-middle) types of features, (lower-middle) data imputation and (bottom) prediction methods for the target variables. For each type of feature/method (rows) and each target variable (columns), we show the distribution of estimated coefficients from a general linear model. Positive coefficients, where distributions lie to the right of the dashed vertical line, indicate better performance than baseline. For ADAS and Ventricle prediction, we flipped the sign of the coefficients so that better performance is consistently shown to the right of the vertical line.

Demographics of D1-D4 datasets

Summary of TADPOLE datasets D1-D4. Each subject was allocated to the Control, MCI or AD group based on the diagnosis at their first available visit within each dataset. The bottom table contains the number of visits with data available, by modality. For example, in D4 there were a total of 150 visits where an MRI scan was undertaken, representing 64% of all visits analysed across all subjects in D4.


Measure | D1 | D2 | D3 | D4
Subjects | 1667 | 896 | 896 | 219

Cognitively Normal
Number (%) | 508 (30.5%) | 369 (41.2%) | 299 (33.4%) | 94 (42.9%)
Visits per subject | 8.3 (4.5) | 8.5 (4.9) | 1.0 (0.0) | 1.0 (0.2)
Age | 74.3 (5.8) | 73.6 (5.7) | 72.3 (6.2) | 78.4 (7.0)
Gender (% male) | 48.6% | 47.2% | 43.5% | 47.9%
MMSE | 29.1 (1.1) | 29.0 (1.2) | 28.9 (1.4) | 29.1 (1.1)
Converters | 18 (3.5%) | 9 (2.4%) | - | -

Mild Cognitive Impairment
Number (%) | 841 (50.4%) | 458 (51.1%) | 269 (30.0%) | 90 (41.1%)
Visits per subject | 8.2 (3.7) | 9.1 (3.6) | 1.0 (0.0) | 1.1 (0.3)
Age | 73.0 (7.5) | 71.6 (7.2) | 71.9 (7.1) | 79.4 (7.0)
Gender (% male) | 59.3% | 56.3% | 58.0% | 64.4%
MMSE | 27.6 (1.8) | 28.0 (1.7) | 27.6 (2.2) | 28.1 (2.1)
Converters | 117 (13.9%) | 37 (8.1%) | - | 9 (10.0%)

Alzheimer's Disease
Number (%) | 318 (19.1%) | 69 (7.7%) | 136 (15.2%) | 29 (13.2%)
Visits per subject | 4.9 (1.6) | 5.2 (2.6) | 1.0 (0.0) | 1.1 (0.3)
Age | 74.8 (7.7) | 75.1 (8.4) | 72.8 (7.1) | 82.2 (7.6)
Gender (% male) | 55.3% | 68.1% | 55.9% | 51.7%
MMSE | 23.3 (2.0) | 23.1 (2.0) | 20.5 (5.9) | 19.4 (7.2)
Converters | - | - | - | 9 (31.0%)

Number of visits with available data (as % of total visits)
Cognitive | 8862 (69.9%) | 5218 (68.1%) | 753 (84.0%) | 223 (95.3%)
MRI | 7884 (62.2%) | 4497 (58.7%) | 224 (25.0%) | 150 (64.1%)
FDG | 2119 (16.7%) | 1544 (20.2%) | 0 (0.0%) | 0 (0.0%)
AV45 | 2098 (16.6%) | 1758 (23.0%) | 0 (0.0%) | 0 (0.0%)
AV1451 | 89 (0.7%) | 89 (1.2%) | 0 (0.0%) | 0 (0.0%)
DTI | 779 (6.1%) | 636 (8.3%) | 0 (0.0%) | 0 (0.0%)
CSF | 2347 (18.5%) | 1458 (19.0%) | 0 (0.0%) | 0 (0.0%)

Description of Algorithms

Summary

We had a total of 33 participating teams, who submitted a total of 58 forecasts from D2, 34 forecasts from D3, and 6 forecasts from custom prediction sets. A total of 8 D2/D3 submissions from 6 teams did not have predictions for all three target variables, so we only computed the performance metrics for the available target variables. Another 3 submissions lacked confidence intervals for either ADAS or ventricle volume, which we imputed using default low-width confidence ranges of 2 for ADAS and 0.002 for Ventricles/ICV. 
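
As an illustration only, a missing interval can be filled in as a symmetric range around the point estimate. Treating the defaults as total interval widths, and the field names used here, are assumptions of this sketch rather than the exact evaluation code.

```python
# Default widths used when a submission provided no confidence interval:
# 2 ADAS points and 0.002 Ventricles/ICV (assumed here to be total widths).
DEFAULT_WIDTH = {"adas13": 2.0, "ventricles_icv": 0.002}

def impute_missing_ci(forecast_row: dict, target: str) -> dict:
    """Fill a missing 50% confidence interval with a narrow default range."""
    estimate = forecast_row[f"{target}_best_guess"]
    if forecast_row.get(f"{target}_ci_lower") is None:
        forecast_row[f"{target}_ci_lower"] = estimate - DEFAULT_WIDTH[target] / 2
        forecast_row[f"{target}_ci_upper"] = estimate + DEFAULT_WIDTH[target] / 2
    return forecast_row
```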

Table 1 below summarizes the methods used in the submissions in terms of feature selection, handling of missing data, predictive models for clinical diagnosis and ADAS/Ventricles biomarkers, as well as training and prediction times. Condensed descriptions of each submitted method can be found here, while even more detailed descriptions are here (original files submitted by participants). 


Submission | Feature selection | Number of features | Missing data imputation | Diagnosis prediction | ADAS/Vent. prediction | Training time | Prediction time (one subject)
AlgosForGood | manual | 16+5* | forward-filling | Aalen model | linear regression | 1 minute | 1 second
Apocalypse | manual | 16 | population average | SVM | linear regression | 40 minutes | 3 minutes
ARAMIS-Pascal | manual | 20 | population average | Aalen model | - | 16 seconds | 0.02 seconds
ATRI-Biostat-JMM | automatic | 15 | random forest | random forest | linear mixed effects model | 2 days | 1 second
ATRI-Biostat-LTJMM | automatic | 15 | random forest | random forest | DPM | 2 days | 1 second
ATRI-Biostat-MA | automatic | 15 | random forest | random forest | DPM + linear mixed effects model | 2 days | 1 second
BGU-LSTM | automatic | 67 | none | feed-forward NN | LSTM | 1 day | milliseconds
BGU-RF / BGU-RFFIX | automatic | ~67+1340* | none | semi-temporal RF | semi-temporal RF | a few minutes | milliseconds
BIGS2 | automatic | all | iterative soft-thresholded SVD | RF | linear regression | 2.2 seconds | 0.001 seconds
Billabong (all) | manual | 15-16 | linear regression | linear scale | non-parametric SM | 7 hours | 0.13 seconds
BORREGOTECMTY | automatic | ~100+400* | nearest-neighbour | regression ensemble | ensemble of regression + hazard models | 18 hours | 0.001 seconds
BravoLab | automatic | 25 | hot deck | LSTM | LSTM | 1 hour | a few seconds
CBIL | manual | 21 | linear interpolation | LSTM | LSTM | 1 hour | one minute
Chen-MCW | manual | 9 | none | linear regression | DPM | 4 hours | < 1 hour
CN2L-NeuralNetwork | automatic | all | forward-filling | RNN | RNN | 24 hours | a few seconds
CN2L-RandomForest | manual | >200 | forward-filling | RF | RF | 15 minutes | < 1 minute
CN2L-Average | automatic | all | forward-filling | RNN/RF | RNN/RF | 24 hours | < 1 minute
CyberBrains | manual | 5 | population average | linear regression | linear regression | 20 seconds | 20 seconds
DIKU (all) | semi-automatic | 18 | none | Bayesian classifier/LDA + DPM | DPM | 290 seconds | 0.025 seconds
DIVE | manual | 13 | none | KDE + DPM | DPM | 20 minutes | 0.06 seconds
EMC1 | automatic | 250 | nearest neighbour | DPM + 2D spline + SVM | DPM + 2D spline | 80 minutes | a few seconds
EMC-EB | automatic | 200-338 | nearest-neighbour | SVM classifier | SVM regressor | 20 seconds | a few seconds
FortuneTellerFish-Control | manual | 19 | nearest neighbour | multiclass ECOC SVM | linear mixed effects model | 1 minute | < 1 second
FortuneTellerFish-SuStaIn | manual | 19 | nearest neighbour | multiclass ECOC SVM + DPM | linear mixed effects model + DPM | 5 hours | < 1 second
Frog | automatic | ~70+420* | none | gradient boosting | gradient boosting | 1 hour | -
GlassFrog-LCMEM-HDR | semi-automatic | all | forward-fill | multi-state model | DPM + regression | 15 minutes | 2 minutes
GlassFrog-SM | manual | 7 | linear model | multi-state model | parametric SM | 93 seconds | 0.1 seconds
GlassFrog-Average | semi-automatic | all | forward-fill/linear | multi-state model | DPM + SM + regression | 15 minutes | 2 minutes
IBM-OZ-Res | manual | 10-15 | filled with zero | stochastic gradient boosting | stochastic gradient boosting | 20 minutes | 0.1 seconds
ITESMCEM | manual | 48 | mean of previous values | RF | LASSO + Bayesian ridge regression | 20 minutes | 0.3 seconds
lmaUCL (all) | manual | 5 | regression | multi-task learning | multi-task learning | 2 hours | milliseconds
Mayo-BAI-ASU | manual | 15 | population average | linear mixed effects model | linear mixed effects model | 20 minutes | 1.3 seconds
Orange | manual | 17 | none | clinician's decision tree | clinician's decision tree | none | 0.2 seconds
Rocket | manual | 6 | median of diagnostic group | linear mixed effects model | DPM | 5 minutes | 0.3 seconds
SBIA | manual | 30-70 | dropped visits with missing data | SVM + density estimator | linear mixed effects model | 1 minute | a few seconds
SPMC-Plymouth (all) | automatic | 20 | none | ? | - | 1 minute | -
SmallHeads-NeuralNetwork | automatic | 376 | nearest neighbour | deep fully-connected NN | deep fully-connected NN | 40 minutes | 0.06 seconds
SmallHeads-LinMixedEffects | automatic | ? | nearest neighbour | - | linear mixed effects model | 25 minutes | 0.13 seconds
Sunshine (all) | semi-automatic | 6 | population average | SVM | linear model | 30 minutes | < 1 minute
Threedays | manual | 16 | none | RF | - | 1 minute | 3 seconds
Tohka-Ciszek-SMNSR | manual | ~32 | nearest neighbour | - | SMNSR | several hours | a few seconds
Tohka-Ciszek-RandomForestLin | manual | ~32 | mean patient value | RF | linear model | a few minutes | a few seconds
VikingAI (all) | manual | 10 | none | DPM + ordered logit model | DPM | 10 hours | 8 seconds
BenchmarkLastVisit | none | 3 | none | constant model | constant model | 7 seconds | milliseconds
BenchmarkMixedEffects | none | 3 | none | Gaussian model | linear mixed effects model | 30 seconds | 0.003 seconds
BenchmarkMixedEffectsAPOE | none | 4 | none | Gaussian model | linear mixed effects model | 30 seconds | 0.003 seconds
BenchmarkSVM | manual | 6 | mean of previous values | SVM | support vector regressor (SVR) | 20 seconds | 0.001 seconds

Table 1. Summary of methods used in the TADPOLE submissions. Keywords: SVM – Support Vector Machine; RF – random forest; LSTM – long short-term memory network; NN – neural network; RNN – recurrent neural network; SMNSR – Sparse Multimodal Neighbourhood Search Regression; DPM – disease progression model; KDE – kernel density estimation; LDA – linear discriminant analysis; SM – slope model; ECOC – error-correcting output codes; SVD – singular value decomposition. (*) Augmented features.

Participant statistics

Locations of participating teams


Team categories


Prediction methods




Organised by:  

Prize sponsors: