Calculate a variety of accuracy measures from observations and predictions of numerical and categorical response variables.
err_default(obs, pred)
A list with (currently) the following components, depending on the type of prediction problem:
'hard' classification: misclassification error and overall accuracy; for two-class problems additionally sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and kappa
'soft' classification: area under the ROC curve (AUROC); error and accuracy at a pred > 0.5 dichotomization; false-positive rate (FPR; 1 - specificity) at 70, 80, and 90 percent sensitivity; true-positive rate (sensitivity) at 80, 90, and 95 percent specificity
regression: bias, standard deviation, root mean squared error (RMSE), median absolute deviation (MAD), median, and interquartile range (IQR) of the residuals
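As a rough illustration (a sketch, not the package's actual implementation), the regression measures can be computed from the residuals obs - pred in base R; the sign convention obs - pred is an assumption, but it is consistent with the err_default(obs, obs + 1) example below reporting a bias of -1:

```r
# Sketch of the regression measures, assuming residuals are obs - pred.
obs  <- c(1, 2, 3, 4)
pred <- obs + 1              # perfect correlation, constant bias of 1
res  <- obs - pred           # all residuals are -1
list(
  bias   = mean(res),          # -1
  stddev = sd(res),            # 0
  rmse   = sqrt(mean(res^2)),  # 1
  mad    = mad(res),           # 0  (median absolute deviation)
  median = median(res),        # -1
  iqr    = IQR(res),           # 0
  count  = length(res)         # 4
)
```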
NA values are currently not handled by this function, i.e. they will result in an error.
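Since missing values trigger an error, one workaround (on the caller's side; not a feature of err_default itself) is to drop incomplete observation/prediction pairs beforehand, e.g. with complete.cases() from base R:

```r
# Hypothetical pre-filtering step; err_default() does not do this itself.
obs  <- c(0.3, NA, 1.2, -0.5)
pred <- c(0.1, 0.9, NA, -0.2)
ok   <- complete.cases(obs, pred)  # TRUE where both values are present
err_default(obs[ok], pred[ok])     # only complete pairs are evaluated
```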
See also: ROCR (a package for computing ROC-curve-based performance measures)
obs <- rnorm(1000)
# Two mock (soft) classification examples:
err_default(obs > 0, rnorm(1000)) # just noise
#> $auroc
#> [1] 0.5144468
#>
#> $error
#> [1] 0.484
#>
#> $accuracy
#> [1] 0.516
#>
#> $sensitivity
#> [1] 0.3202479
#>
#> $specificity
#> [1] 0.6996124
#>
#> $fpr70
#> [1] 0.6589147
#>
#> $fpr80
#> [1] 0.8023256
#>
#> $fpr90
#> [1] 0.9069767
#>
#> $tpr80
#> [1] 0.2293388
#>
#> $tpr90
#> [1] 0.0785124
#>
#> $tpr95
#> [1] 0.05165289
#>
#> $events
#> [1] 484
#>
#> $count
#> [1] 1000
#>
err_default(obs > 0, obs + rnorm(1000)) # some discrimination
#> $auroc
#> [1] 0.8507352
#>
#> $error
#> [1] 0.238
#>
#> $accuracy
#> [1] 0.762
#>
#> $sensitivity
#> [1] 0.6342975
#>
#> $specificity
#> [1] 0.8817829
#>
#> $fpr70
#> [1] 0.1802326
#>
#> $fpr80
#> [1] 0.2732558
#>
#> $fpr90
#> [1] 0.4282946
#>
#> $tpr80
#> [1] 0.7190083
#>
#> $tpr90
#> [1] 0.5929752
#>
#> $tpr95
#> [1] 0.4338843
#>
#> $events
#> [1] 484
#>
#> $count
#> [1] 1000
#>
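The fixed-sensitivity and fixed-specificity measures above can be approximated directly from a ROC curve. A sketch using the ROCR package, under the assumption that fpr70 means the smallest FPR at which sensitivity reaches 70 percent (and tpr90 the largest TPR at 90 percent specificity):

```r
library(ROCR)
set.seed(1)
obs  <- rnorm(1000)
pred <- obs + rnorm(1000)              # some discrimination, as above
pr   <- prediction(pred, obs > 0)      # continuous scores vs. binary truth
roc  <- performance(pr, "tpr", "fpr")  # ROC curve: TPR against FPR
tpr  <- roc@y.values[[1]]
fpr  <- roc@x.values[[1]]
min(fpr[tpr >= 0.70])                  # cf. fpr70
max(tpr[fpr <= 1 - 0.90])              # cf. tpr90
```

The exact values will differ from the example output above, which was produced without a fixed random seed.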
# Three mock regression examples:
err_default(obs, rnorm(1000)) # just noise, but no bias
#> $bias
#> [1] 0.04713972
#>
#> $stddev
#> [1] 1.40456
#>
#> $rmse
#> [1] 1.404649
#>
#> $mad
#> [1] 1.371689
#>
#> $median
#> [1] 0.04761713
#>
#> $iqr
#> [1] 1.819207
#>
#> $count
#> [1] 1000
#>
err_default(obs, obs + rnorm(1000)) # some association, no bias
#> $bias
#> [1] -0.005555493
#>
#> $stddev
#> [1] 0.9838134
#>
#> $rmse
#> [1] 0.983337
#>
#> $mad
#> [1] 0.9748868
#>
#> $median
#> [1] 0.01726579
#>
#> $iqr
#> [1] 1.322765
#>
#> $count
#> [1] 1000
#>
err_default(obs, obs + 1) # perfect correlation, but with bias
#> $bias
#> [1] -1
#>
#> $stddev
#> [1] 6.646133e-17
#>
#> $rmse
#> [1] 1
#>
#> $mad
#> [1] 0
#>
#> $median
#> [1] -1
#>
#> $iqr
#> [1] 0
#>
#> $count
#> [1] 1000
#>