
Net Reclassification Improvement - Model Metrics

Hello Everyone,

This question is about modeling and performance measurement. Let's say we have a binary classification problem. I read that AUC may not always be a reliable metric for judging the effect of adding a new variable to a model. That's when I came across a metric called NRI (Net Reclassification Improvement) for comparing model performance with and without the new variable.

Has anyone tried that here? And is there a Python package that can do this?

Is the confusion matrix of each model (one with the extra feature, the other with the usual features) enough to compute NRI?

Can anyone help me with this?

Hi Akshay,

The PredictionComparison package implements this: https://github.com/OHDSI/PredictionComparison/blob/master/R/metrics.R. If you develop two models using the PatientLevelPrediction package, then NRI(runPlp1, runPlp2) will calculate the NRI over a range of thresholds.
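On the confusion matrix question: the categorical NRI (Pencina et al.) needs, for each subject, the outcome plus the risk category assigned by each model, i.e. the paired reclassification, not just the two separate confusion matrices. If you want to roll it yourself in Python, here is a minimal sketch under that assumption; the names y, cat_old, and cat_new are hypothetical inputs, not from any particular package:

import numpy as np

def categorical_nri(y, cat_old, cat_new):
    """Categorical NRI.

    y        : 1 = event, 0 = non-event, per subject (hypothetical input)
    cat_old  : risk category from the baseline model, per subject
    cat_new  : risk category from the model with the extra feature
    Categories must be ordered (higher = higher predicted risk).
    """
    y = np.asarray(y)
    up = np.asarray(cat_new) > np.asarray(cat_old)    # moved to a higher risk category
    down = np.asarray(cat_new) < np.asarray(cat_old)  # moved to a lower risk category

    events, nonevents = (y == 1), (y == 0)

    # Events should move up, non-events should move down.
    nri_events = up[events].mean() - down[events].mean()
    nri_nonevents = down[nonevents].mean() - up[nonevents].mean()
    return nri_events + nri_nonevents, nri_events, nri_nonevents

# Toy example with made-up risk categories (0 = low, 1 = medium, 2 = high)
y       = [1, 1, 0, 0, 1, 0]
cat_old = [1, 0, 2, 1, 2, 0]
cat_new = [2, 1, 1, 1, 2, 0]
print(categorical_nri(y, cat_old, cat_new))

The same idea extends to the continuous (category-free) NRI by replacing the category comparison with a comparison of predicted risks.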
