Title: | An R-Shiny Application for Calculating Cohen's and Fleiss' Kappa |
Description: | Offers a graphical user interface for the evaluation of inter-rater agreement with Cohen's and Fleiss' Kappa. The calculation of kappa statistics is done using the R package 'irr', so that 'KappaGUI' is essentially a Shiny front-end for 'irr'. |
Authors: | Frédéric Santos |
Maintainer: | Frédéric Santos <[email protected]> |
License: | GPL (>= 2) |
Version: | 2.0.2 |
Built: | 2024-11-15 03:35:46 UTC |
Source: | https://github.com/cran/KappaGUI |
Package: | KappaGUI |
Type: | Package |
Version: | 2.0.2 |
Date: | 2018-03-22 |
License: | GPL (>= 2) |
Frédéric Santos, [email protected]
Cohen, J. (1960) A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.
Cohen, J. (1968) Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70, 213–220.
irr::kappa2
## Not run: StartKappa()
This function is based on the function 'kappa2' from the package 'irr', and simply adds the possibility of calculating several kappas at once.
kappaCohen(data, weight="unweighted")
data | dataframe of scores, with two columns per trait (one column per rater); see the example below, and the help page of ‘StartKappa’ for the expected column layout. |
weight | character string specifying the weighting scheme ("unweighted", "equal" or "squared"). See the function ‘kappa2’ from the package ‘irr’. |
For each trait, only complete cases are used for the calculation.
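Conceptually, the computation amounts to applying irr::kappa2 to each consecutive pair of columns. The following is only a minimal sketch of this idea (not the actual source code of the package):

library(irr)
# For each trait (i.e. each consecutive pair of columns), compute Cohen's kappa
# on the complete cases of that pair only.
cohen_by_trait <- function(data, weight = "unweighted") {
  pairs <- split(seq_len(ncol(data)), rep(seq_len(ncol(data) / 2), each = 2))
  sapply(pairs, function(idx) {
    ratings <- na.omit(data[, idx])
    kappa2(ratings, weight = weight)$value
  })
}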
A dataframe with one row per trait and three columns, giving respectively the kappa value for each trait, the number of individuals used to calculate this value, and the associated p-value.
Frédéric Santos, [email protected]
Cohen, J. (1960) A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.
Cohen, J. (1968) Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70, 213–220.
irr::kappa2
# Here we create and display an artificial dataset,
# describing two traits coded by two raters:
scores <- data.frame(
  Trait1_A = c(1,0,2,1,1,1,0,2,1,1),
  Trait1_B = c(1,2,0,1,2,1,0,1,2,1),
  Trait2_A = c(1,4,5,2,3,5,1,2,3,4),
  Trait2_B = c(2,5,2,2,4,5,1,3,1,4)
)
scores

# Retrieve Cohen's kappa for Trait1 and Trait2,
# to evaluate inter-rater agreement between raters A and B:
kappaCohen(scores, weight="unweighted")
kappaCohen(scores, weight="squared")
Fleiss' kappa for n raters, computed for each n-tuple of columns in a given dataframe
This function is based on the function 'kappam.fleiss' from the package 'irr', and simply adds the possibility of calculating several kappas at once.
kappaFleiss(data, nb_raters=3)
data | dataframe of scores, with ‘nb_raters’ columns per trait (one column per rater); see the example below, and the help page of ‘StartKappa’ for the expected column layout. |
nb_raters | integer giving the number of raters. |
For each trait, only complete cases are used for the calculation.
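Similarly to kappaCohen, the computation can be thought of as applying irr::kappam.fleiss to each consecutive group of nb_raters columns. Again, this is only a minimal sketch, not the package's actual code:

library(irr)
# For each trait (each consecutive group of nb_raters columns), compute
# Fleiss' kappa on the complete cases of that group only.
fleiss_by_trait <- function(data, nb_raters = 3) {
  groups <- split(seq_len(ncol(data)),
                  rep(seq_len(ncol(data) / nb_raters), each = nb_raters))
  sapply(groups, function(idx) kappam.fleiss(na.omit(data[, idx]))$value)
}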
A dataframe with one row per trait and two columns, giving respectively the kappa value for each trait and the number of individuals used to calculate this value.
Frédéric Santos, [email protected]
Cohen, J. (1960) A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.
Cohen, J. (1968) Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70, 213–220.
irr::kappam.fleiss
# Here we create and display an artificial dataset,
# describing two traits coded by three raters:
scores <- data.frame(
  Trait1_A = c(1,0,2,1,1,1,0,2,1,1),
  Trait1_B = c(1,2,0,1,2,1,0,1,2,1),
  Trait1_C = c(2,2,2,1,1,1,0,1,2,1),
  Trait2_A = c(1,4,5,2,3,5,1,2,3,4),
  Trait2_B = c(2,5,2,2,4,5,1,3,1,4),
  Trait2_C = c(2,4,3,2,4,5,2,2,3,4)
)
scores

# Retrieve Fleiss' kappa for Trait1 and Trait2,
# to evaluate inter-rater agreement between raters A, B and C:
kappaFleiss(scores, nb_raters=3)
Launches the R-Shiny application. The user can retrieve inter-rater agreement scores from a file (.CSV or .TXT) loaded directly through the graphical interface.
StartKappa()
Data are imported directly through the graphical user interface; only CSV and TXT files are accepted.
If there are p variables observed by n raters on m individuals, the input file should be a data frame with m rows and (n × p) columns. The first n columns contain the scores attributed by the n raters for the first variable; the next n columns contain the scores attributed by the n raters for the second variable; and so on. Cohen's or Fleiss' kappas are returned for each variable.
The data file must contain a header, and the columns must be labelled as follows: ‘VariableName_X’, where X is a unique character (letter or number) associated with each rater. An example of a correct data file with two raters is given here: http://www.pacea.u-bordeaux.fr/IMG/csv/data_Kappa_Cohen.csv.
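For instance, a correctly formatted file could be produced from R itself. This is only a minimal sketch; the variable names, rater labels and file name are purely illustrative:

# Two traits scored by two raters (A and B): the column names follow the
# 'VariableName_X' convention expected by StartKappa().
scores <- data.frame(
  Colour_A = c("blue", "green", "blue", "green"),
  Colour_B = c("blue", "blue", "blue", "green"),
  Size_A   = c(1, 2, 3, 2),
  Size_B   = c(1, 3, 3, 2)
)
# Write a CSV file (with header) that can then be loaded through the interface:
write.csv(scores, file = "data_for_kappa.csv", row.names = FALSE)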
Kappa values are calculated using the functions kappa2 and kappam.fleiss from the package ‘irr’. Please check their help pages for more technical details, in particular about the weighting options for Cohen's kappa. For ordered factors, linear or quadratic weighting can be a good choice, as it gives more importance to strong disagreements. If linear or quadratic weighting is chosen, the levels of the factors are assumed to be ordered alphabetically; as a consequence, a factor with the three levels "Low", "Medium" and "High" would be ordered in an inconvenient way. In this case, please recode the levels with names matching the natural order of the levels, as illustrated below.
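As an illustration (a minimal sketch, with hypothetical column names and a helper function that is not part of ‘KappaGUI’), ordinal levels can be renamed so that their alphabetical order matches their natural order before a weighted kappa is computed:

# Prefix each level with its rank so that alphabetical order matches natural order.
recode_ordinal <- function(x, ordered_levels) {
  new_labels <- paste(seq_along(ordered_levels), ordered_levels, sep = "_")
  factor(as.character(x), levels = ordered_levels, labels = new_labels)
}

ratings <- data.frame(
  Pain_A = c("Low", "High", "Medium", "Low", "Medium"),
  Pain_B = c("Low", "Medium", "Medium", "Low", "High")
)
ratings[] <- lapply(ratings, recode_ordinal,
                    ordered_levels = c("Low", "Medium", "High"))
# "1_Low" < "2_Medium" < "3_High" now sort correctly; the recoded data can be
# saved to CSV and loaded in StartKappa(), or passed to kappaCohen() with
# weight = "equal" or weight = "squared".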
The function returns no value, but the table of results can be downloaded as a CSV file through the user interface.
Frédéric Santos, [email protected]
Cohen, J. (1960) A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.
Cohen, J. (1968) Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70, 213–220.
irr::kappa2, irr::kappam.fleiss