This function calculates rho for a testSet, a contingencyTable, or an observed kappa value with its associated set parameters (testSetLength and OcSBaserate).

rho(
  x,
  OcSBaserate = NULL,
  testSetLength = NULL,
  testSetBaserateInflation = 0,
  OcSLength = 10000,
  replicates = 800,
  ScSKappaThreshold = 0.9,
  ScSKappaMin = 0.4,
  ScSPrecisionMin = 0.6,
  ScSPrecisionMax = 1
)
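For illustration, a contingencyTable might be supplied as a 2x2 matrix of agreement counts between the two raters. The matrix below is a hypothetical example, not package data, and assumes the first rater's codes on the rows and the second rater's on the columns:

ct <- matrix(c(30, 10, 5, 55), nrow = 2, byrow = TRUE)  # hypothetical agreement counts
rho(x = ct)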

Arguments

x

The observed kappa value, testSet, or contingencyTable to be tested with rho

OcSBaserate

The baserate of the observed codeSet (defaults to the baserate of the testSet or contingencyTable)

testSetLength

The length of the testSet (ignored unless x is an observed kappa value)

testSetBaserateInflation

The minimum baserate enforced by the sampling procedure

OcSLength

The length of the observed codeSet

replicates

The number of simulated codeSets to use in the null hypothesis distribution for rho; similar to replicates in a Monte Carlo study

ScSKappaThreshold

The maximum kappa value used to generate simulated codeSets in the null hypothesis distribution for rho

ScSKappaMin

The minimum kappa value used to generate simulated codeSets in the null hypothesis distribution for rho

ScSPrecisionMin

The minimum precision to be used for generation of simulated codeSets in the null hypothesis distribution for rho

ScSPrecisionMax

The maximum precision to be used for generation of simulated codeSets in the null hypothesis distribution for rho

Value

If x is an observed kappa value: rho for the given parameters.

If x is a testSet or contingencyTable: a list containing rho, kappa, recall, and precision for the given data and parameters (see Examples).
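A brief sketch of working with the returned list; the element names follow the Examples section, and codeSet stands in for any valid testSet:

result <- rho(x = codeSet)
result$rho        # percent of the null distribution at or above the observed kappa
result$kappa      # observed kappa on the testSet
result$recall     # observed recall
result$precision  # observed precision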

Details

Rho is a Monte Carlo rejective method for assessing interrater reliability, implemented here for Cohen's kappa. Rho constructs a collection of data sets in which kappa is below a specified threshold, and computes the empirical distribution of kappa based on the specified sampling procedure. Rho returns the percent of the empirical distribution greater than or equal to the observed kappa. As a result, rho quantifies the Type I error in generalizing from an observed testSet to a true value of agreement between two raters.
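The final step reduces to a simple proportion. A minimal sketch of that computation, assuming nullKappas holds the kappa values of the simulated null distribution (the uniform draw below is only a stand-in for the actual simulation procedure):

observedKappa <- 0.88
nullKappas <- runif(800, min = 0.4, max = 0.9)  # stand-in for simulated kappas
rhoValue <- mean(nullKappas >= observedKappa)   # percent of null kappas >= observed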

Rho starts with an observed kappa value, calculated on a subset of a codeSet known as an observed testSet, and a kappa threshold that indicates what is considered significant agreement between raters.

It then generates a collection of fully-coded, simulated codeSets (ScS), further described in createSimulatedCodeSet, all of which have a kappa value below the kappa threshold and properties similar to those of the original codeSet.
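One way to realize this constraint is rejection sampling. The sketch below is not the package's implementation; generate and kappaOf are hypothetical functions supplied by the caller:

simulateNullSets <- function(n, kappaMin, kappaMax, generate, kappaOf) {
  sets <- list()
  while (length(sets) < n) {
    s <- generate()                    # candidate fully-coded simulated codeSet
    k <- kappaOf(s)                    # kappa of the candidate set
    if (k >= kappaMin && k < kappaMax) {
      sets[[length(sets) + 1]] <- s    # keep only sets below the kappa threshold
    }
  }
  sets
}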

Then, kappa is calculated on a testSet sampled from each of the ScSs in the collection to create a null hypothesis distribution. These testSets mirror the observed testSet in size and sampling method. How these testSets are sampled is further described in getTestSet.
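For reference, Cohen's kappa on a 2x2 contingency table can be computed with the standard textbook formula below; this is not necessarily the package's internal code:

calcKappa <- function(ct) {
  n  <- sum(ct)
  po <- sum(diag(ct)) / n                      # observed agreement
  pe <- sum(rowSums(ct) * colSums(ct)) / n^2   # chance agreement from the margins
  (po - pe) / (1 - pe)
}
calcKappa(matrix(c(30, 10, 5, 55), nrow = 2, byrow = TRUE))
#> [1] 0.6808511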

The null hypothesis is that the observed testSet was sampled from a data set which, if both raters were to code it in its entirety, would result in a level of agreement below the kappa threshold.

For example, using an alpha level of 0.05, if the observed kappa is greater than 95 percent of the kappas in the null hypothesis distribution, the null hypothesis is rejected, and one can conclude that the two raters would have had acceptable agreement had they coded the entire data set.
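In code, the decision rule is a direct comparison of rho against the chosen alpha level; the call and threshold below are illustrative. Note that rho plays the role of a p-value here: smaller values give stronger evidence against the null hypothesis.

alpha <- 0.05
rhoValue <- rho(x = 0.88, OcSBaserate = 0.2, testSetLength = 80)
if (rhoValue < alpha) {
  message("Reject the null hypothesis: the observed agreement generalizes")
}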

See also

createSimulatedCodeSet, getTestSet

Examples

# Given an observed kappa value
rho(x = 0.88, OcSBaserate = 0.2, testSetLength = 80)
#> [1] 0.0925

# Given a testSet
rho(x = codeSet)
#> $rho
#> [1] 1
#>
#> $kappa
#> [1] 0.625
#>
#> $recall
#> [1] 0.75
#>
#> $precision
#> [1] 0.6

# Given a contingencyTable
rho(x = contingencyTable)
#> $rho
#> [1] 0.59875
#>
#> $kappa
#> [1] 0.625
#>
#> $recall
#> [1] 0.75
#>
#> $precision
#> [1] 0.6