
Patient-Level Prediction: Model performance - The AUROC plot interpretation

Hello All,
I implemented the following predictive study described in the table below. Mainly I have two doubts:

  1. Which is the most appropriate metric to report the performance of the model, the CV or test AUROC?
  2. Which of the configured hyperparameters correspond to the ROC curve plotted?

Regards,
Alonso

Table with study specification

Definition        Value
Algorithm         Gradient Boosting Machine
Hyper-parameters  ntree: 5000; max depth: 4, 7 or 10; learning rate: 0.001, 0.01, 0.1 or 0.9
Covariates        Gender, Age, Age Group, Measurement Value (<5, <10)
Data split        75% train, 25% test, randomly assigned by person

Results

[image: AUROC plot of the model results]

You should look at the test AUC. The CV performance will probably be too optimistic (at least on average), since the hyperparameters that maximise the CV performance are the ones chosen for the final model.
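If it helps, here is a rough sketch of how you could pull both AUROC values out of the result object in R. This assumes `plpResults` is the object returned by `runPlp()` and that `evaluationStatistics` is a data frame with `evaluation`, `metric` and `value` columns; the exact layout may differ between PatientLevelPrediction versions, so check `str(plpResults$performanceEvaluation)` first.

```r
# Sketch: compare CV and test AUROC from a PatientLevelPrediction result.
# Assumption: evaluationStatistics has columns evaluation / metric / value
# (verify with str() on your own plpResults object).
stats <- plpResults$performanceEvaluation$evaluationStatistics
aucRows <- stats[stats$metric == "AUROC", c("evaluation", "value")]
print(aucRows)  # one row per evaluation set, e.g. CV and Test
```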

Regarding the hyperparameters, I’m not that familiar with the Shiny app. I did have a look, and it seems they should open when you click the view button next to the gear icon in this picture:

However, when I open it, nothing shows up for GBM models. I think there might be a bug that prevents the hyperparameters for the gradient boosting machine from coming through.

But in terms of R objects, if plpResults is the output of your run, the hyperparameters should be in:

plpResults$model$trainDetails$finalModelParameters

I’ll open an issue on the PLP GitHub for the possible bug.

@egillax Thanks for your help :hugs:
I was able to check the hyperparameters on the plpResults object, and with that I could do a more thorough performance analysis.
Opening an issue on GitHub would be useful, because I can’t see the settings of my GBM model in the Shiny app. Please let me know when you do.
Alonso
