---
format:
gfm:
html-math-method: webtex
---
```{r, echo = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
fig.path = "man/figures/README-"
)
```
# quadagree <img src="man/figures/logo.png" align="right" width="163" height="85" />
[![CRAN_Status_Badge](https://www.r-pkg.org/badges/version/quadagree)](https://cran.r-project.org/package=quadagree) [![R-CMD-check](https://github.com/JonasMoss/quadagree/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/JonasMoss/quadagree/actions/workflows/R-CMD-check.yaml)
An `R` package for calculating, and doing inference on, quadratically weighted multi-rater agreement coefficients: Fleiss' kappa, Conger's kappa (the multi-rater generalization of Cohen's kappa), and the Brennan-Prediger coefficient. The package supports missing values using the methods of Moss and van Oest (work in progress) and Moss (work in progress).
```{r setup, include=FALSE}
library("quadagree")
knitr::opts_chunk$set(echo = TRUE)
knitr::opts_chunk$set(out.width = "750px", dpi = 200)
```
## Installation
The package is not available on `CRAN` yet, so use the following command from inside `R`:
```{r install, echo = TRUE, eval = FALSE}
# install.packages("remotes")
remotes::install_github("JonasMoss/quadagree")
```
## Usage
Call the `library` function and load the data of Zapf et al. (2016):
```{r prepare, echo = TRUE, eval = TRUE}
library("quadagree")
head(dat.zapf2016)
```
Then calculate an asymptotically distribution-free confidence interval for Fleiss' $\kappa$:
```{r}
fleissci(dat.zapf2016)
```
You can also calculate confidence intervals for Conger's kappa (Cohen's kappa) and the Brennan-Prediger coefficient.
```{r}
#| cache: true
congerci(dat.zapf2016)
```
## Support for missing values
The inferential methods support missing values, using pairwise available information in the biased sample covariance matrix. We use the asymptotic method of van Praag et al. (1985).
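The pairwise idea itself can be illustrated with base `R` (a sketch only, not the package's internal estimator; note that `cov` uses the unbiased $n-1$ divisor for each pair, whereas the method above uses the biased version):

```r
# Toy ratings: 6 items (rows) scored by 3 raters (columns), with missing cells
x <- matrix(c(1, 2, 2,
              2, 2, NA,
              3, 3, 3,
              1, NA, 2,
              2, 2, 1,
              3, NA, 3),
            ncol = 3, byrow = TRUE)

# Each covariance entry uses all items where both raters are observed,
# so an item with one missing cell still contributes to the other pairs
cov(x, use = "pairwise.complete.obs")
```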
The data from Klein (2018) contains missing values.
```{r}
#| cache: true
head(dat.klein2018)
```
The estimates returned by `fleissci`, `congerci`, and `bpci` are consistent.
```{r}
#| cache: true
congerci(dat.klein2018)
```
## Supported inferential techniques
`quadagree` supports three basic asymptotic confidence interval constructions: the asymptotically distribution-free interval, the pseudo-elliptical interval, and the normal interval.
| Method | Description |
|------------|------------------------------------------------------------|
| `adf` | The asymptotically distribution-free method. The method is asymptotically correct, but has poor small-sample performance. |
| `elliptical` | The elliptical or pseudo-elliptical kurtosis correction. Uses the unbiased sample estimator of the common kurtosis (Joanes & Gill, 1998). Has better small-sample performance than `adf` and `normal` if the kurtosis is large and $n$ is small. |
| `normal` | Assumes normality of $X$. This method is not recommended, since it yields confidence intervals that are too short when the excess kurtosis of $X$ is larger than $0$. |
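As a rough illustration of why the kurtosis matters (a sketch using a naive moment estimator, not the unbiased Joanes & Gill estimator the package uses): under an elliptical model, the normal-theory asymptotic variances are inflated by approximately $1 + \hat{\gamma}_2/3$, where $\hat{\gamma}_2$ is the common excess kurtosis.

```r
set.seed(1)
x <- matrix(rt(300, df = 6), ncol = 3)  # heavy-tailed "ratings", 3 raters

# Naive moment-based excess kurtosis per column, pooled across columns
excess_kurt <- function(v) {
  m2 <- mean((v - mean(v))^2)
  m4 <- mean((v - mean(v))^4)
  m4 / m2^2 - 3
}
gamma2 <- mean(apply(x, 2, excess_kurt))

# Approximate inflation factor relative to the normal-theory variance
1 + gamma2 / 3
```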
In addition, you may transform the intervals using one of four transforms:
1. The [Fisher transform](https://en.wikipedia.org/wiki/Fisher_transformation), or $\kappa\mapsto \operatorname{artanh}(\kappa)$. Famously used in inference for the correlation coefficient.
2. The $\log$ transform, where $\kappa \mapsto \log(1-\kappa)$. This is an asymptotic pivot under the elliptical model with parallel items.
3. The identity transform. The default option.
4. The [$\arcsin$ transform](https://en.wikipedia.org/wiki/Inverse_trigonometric_functions). This transform may fail when $n$ is small, as negative values of $\hat{\kappa}$ are possible, and $\arcsin$ does not accept them.
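With illustrative numbers (a hand-rolled delta-method sketch, not the package's code), the Fisher and $\log$ transforms work by building the interval on the transformed scale and mapping it back:

```r
kappa_hat <- 0.65   # illustrative point estimate
se <- 0.08          # illustrative standard error of kappa_hat
z <- qnorm(0.975)

# Fisher: artanh'(k) = 1 / (1 - k^2), so the delta-method SE on the
# transformed scale is se / (1 - kappa_hat^2); tanh maps the interval back
fisher <- tanh(atanh(kappa_hat) + c(-1, 1) * z * se / (1 - kappa_hat^2))

# Log: k -> log(1 - k) is decreasing, so the endpoints swap on the way
# back; the delta-method SE is se / (1 - kappa_hat)
log_ci <- 1 - exp(log(1 - kappa_hat) + c(1, -1) * z * se / (1 - kappa_hat))

rbind(fisher = fisher, log = log_ci)  # both respect the upper bound of 1
```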
The option `bootstrap` performs studentized bootstrapping (Efron, 1987) with `n_reps` repetitions. If `bootstrap = FALSE`, an ordinary normal approximation is used. The studentized bootstrap interval is second-order correct, so it tends to outperform the normal approximation when $n$ is sufficiently large.
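For intuition, a generic studentized bootstrap for a mean can be sketched in a few lines of base `R` (this is not `quadagree`'s implementation; `n_reps` here only mirrors the argument name above):

```r
set.seed(2)
x <- rexp(50)  # a skewed sample
theta_hat <- mean(x)
se_hat <- sd(x) / sqrt(length(x))

# Bootstrap the studentized pivot (theta* - theta_hat) / se*
n_reps <- 2000
t_star <- replicate(n_reps, {
  xb <- sample(x, replace = TRUE)
  (mean(xb) - theta_hat) / (sd(xb) / sqrt(length(xb)))
})

# Bootstrap-t interval: the pivot quantiles flip around theta_hat
q <- quantile(t_star, c(0.975, 0.025))
ci <- theta_hat - q * se_hat
```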
## Data on wide form
Some agreement data is recorded in *wide form* instead of *long form*: each row contains the number of times an item received each possible rating, along with the total number of ratings for that item. The data of Fleiss (1971) is on this form:
```{r}
head(dat.fleiss1971)
```
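The relationship between the two forms can be sketched with a toy matrix (the actual column layout of `dat.fleiss1971` may differ):

```r
# Toy wide form: 3 items (rows), counts for rating categories 1..3;
# every item was rated by 4 raters in total
wide <- matrix(c(2, 1, 1,
                 0, 4, 0,
                 1, 1, 2),
               ncol = 3, byrow = TRUE)
colnames(wide) <- paste0("cat", 1:3)

# Expand the counts back into individual ratings; since the raters are
# exchangeable, which rater gets which rating is arbitrary
long <- t(apply(wide, 1, function(counts) rep(seq_along(counts), counts)))
long  # 3 items x 4 (anonymous) raters
```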
Provided the raters are exchangeable, in the sense that the ratings are conditionally independent given the item, consistent inference for Fleiss' kappa and the Brennan-Prediger coefficient is possible using `fleissci_aggr` and `bpci_aggr`.
```{r}
fleissci_aggr(dat.fleiss1971)
```
The results agree with `irrCAC`.
```{r}
irrCAC::fleiss.kappa.dist(dat.fleiss1971, weights = "quadratic")
```
## Similar software
There are several `R` packages that calculate agreement coefficients. The most feature-complete is `irrCAC`, which supports calculation and inference for agreement coefficients with more weightings than the quadratic. However, it does not support consistent inference in the presence of missing data, as demonstrated in the consistency vignette.
## How to Contribute or Get Help
If you encounter a bug, have a feature request, or need some help, open a [Github issue](https://github.com/JonasMoss/quadagree/issues). Create a pull request to contribute. This project follows a [contributor code of conduct](https://www.contributor-covenant.org/version/2/1/code_of_conduct/code_of_conduct.md).
## References
- Moss, van Oest (work in progress). Inference for quadratically weighted multi-rater kappas with missing raters.
- Moss (work in progress). On the Brennan–Prediger coefficients.
- Cohen, J. (1968). Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4), 213–220. https://doi.org/10.1037/h0026256
- Conger, A. J. (1980). Integration and generalization of kappas for multiple raters. Psychological Bulletin, 88(2), 322–328. https://doi.org/10.1037/0033-2909.88.2.322
- Efron, B. (1987). Better bootstrap confidence intervals. Journal of the American Statistical Association, 82(397), 171–185.
- Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5), 378–382. https://doi.org/10.1037/h0031619
- Fleiss, J. L. (1975). Measuring agreement between two judges on the presence or absence of a trait. Biometrics, 31(3), 651–659. https://www.ncbi.nlm.nih.gov/pubmed/1174623
- Joanes, D. N., & Gill, C. A. (1998). Comparing measures of sample skewness and kurtosis. Journal of the Royal Statistical Society: Series D (The Statistician), 47(1), 183-189. https://doi.org/10.1111/1467-9884.00122
- Lin, L. I. (1989). A concordance correlation coefficient to evaluate reproducibility. Biometrics, 45(1), 255–268. https://www.ncbi.nlm.nih.gov/pubmed/2720055
- Klein, D. (2018). Implementing a General Framework for Assessing Interrater Agreement in Stata. The Stata Journal, 18(4), 871–901. https://doi.org/10.1177/1536867X1801800408
- Van Praag, B. M. S., Dijkstra, T. K., & Van Velzen, J. (1985). Least-squares theory based on general distributional assumptions with an application to the incomplete observations problem. Psychometrika, 50(1), 25–36. https://doi.org/10.1007/BF02294145
- Zapf, A., Castell, S., Morawietz, L., et al. (2016). Measuring inter-rater reliability for nominal data – which coefficients and confidence intervals are appropriate? BMC Medical Research Methodology, 16, 93. https://doi.org/10.1186/s12874-016-0200-9