Visualizing the Performance of Scoring Classifiers

ROC graphs, sensitivity/specificity curves, lift charts, and precision/recall plots are popular examples of trade-off visualizations for specific pairs of performance measures. ROCR is a flexible tool for creating cutoff-parameterized 2D performance curves by freely combining any two of more than 25 performance measures (new performance measures can be added through a standard interface). Curves from different cross-validation or bootstrapping runs can be averaged by several methods, and standard deviations, standard errors, or box plots can be used to visualize the variability across runs. The parameterization can be visualized by printing cutoff values at the corresponding curve positions, or by coloring the curve according to cutoff. All components of a performance plot can be quickly adjusted using a flexible parameter dispatching mechanism. Despite its flexibility, ROCR is easy to use, with only three commands and reasonable default values for all optional parameters.
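The three commands mentioned above are prediction, performance, and plot. A minimal session might look like the following sketch, which uses the ROCR.simple example data shipped with the package and colors the ROC curve by cutoff:

```r
library(ROCR)

# Example scores and class labels shipped with the package
data(ROCR.simple)

# 1. prediction(): turn numeric scores and true labels into a prediction object
pred <- prediction(ROCR.simple$predictions, ROCR.simple$labels)

# 2. performance(): combine two measures, here true vs. false positive rate (ROC)
perf <- performance(pred, measure = "tpr", x.measure = "fpr")

# 3. plot(): draw the curve; colorize = TRUE visualizes the cutoff parameterization
plot(perf, colorize = TRUE)
```

Swapping the measure names (e.g. "prec" and "rec") yields a precision/recall plot from the same prediction object.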


This file documents changes and updates to the ROCR package.

Version 1.0-7 (Mar 26, 2015)

  • Changed maintainer email address

Version 1.0-5 (May 12, 2013)

  • Used standardized license specification in DESCRIPTION file
  • Removed LICENCE file
  • Removed .First.lib in zzz.R
  • CITATION moved into inst folder and adjusted

Version 1.0-4 (Dec 08, 2009)

  • Fixed a bug introduced in 1.0-3 that prevented plot arguments from being passed through

Version 1.0-3

  • Adapted to stricter R CMD check rules in R > 2.9

Version 1.0-2 (Jan 27, 2007)

  • Fixed a minor bug in the 'prediction' function concerning the optional parameter 'label.ordering' (thanks to Robert Perdisci for notifying us).
  • Added an optional parameter 'fpr.stop' to the performance measure 'auc', which allows calculating the partial area under the ROC curve up to the false positive rate given by 'fpr.stop'.
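The two options mentioned above can be used as in the following sketch (again on the package's ROCR.simple example data):

```r
library(ROCR)
data(ROCR.simple)

# 'label.ordering' fixes which label is treated as the negative and which
# as the positive class, instead of relying on the default sort order
pred <- prediction(ROCR.simple$predictions, ROCR.simple$labels,
                   label.ordering = c(0, 1))

# Full area under the ROC curve
auc <- performance(pred, "auc")@y.values[[1]]

# Partial AUC, integrating only up to a false positive rate of 0.1
pauc <- performance(pred, "auc", fpr.stop = 0.1)@y.values[[1]]
```

Since the partial area integrates over a subrange of the false positive rate, pauc can never exceed auc.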

Reference manual

The reference manual is available as a downloadable PDF.