Package 'KappaGUI'

Title: An R-Shiny Application for Calculating Cohen's and Fleiss' Kappa
Description: Offers a graphical user interface for the evaluation of inter-rater agreement with Cohen's and Fleiss' Kappa. The calculation of kappa statistics is done using the R package 'irr', so that 'KappaGUI' is essentially a Shiny front-end for 'irr'.
Authors: Frédéric Santos
Maintainer: Frédéric Santos <[email protected]>
License: GPL (>= 2)
Version: 2.0.2
Built: 2024-11-15 03:35:46 UTC
Source: https://github.com/cran/KappaGUI

Help Index


An R-Shiny application for calculating Cohen's and Fleiss' Kappa

Description

Offers a graphical user interface for the evaluation of inter-rater agreement with Cohen's and Fleiss' Kappa. The calculation of kappa statistics is done using the R package 'irr', so that 'KappaGUI' is essentially a Shiny front-end for 'irr'.

Details

Package: KappaGUI
Type: Package
Version: 2.0.2
Date: 2018-03-22
License: GPL (>= 2)

Author(s)

Frédéric Santos, [email protected]

References

Cohen, J. (1960) A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.

Cohen, J. (1968) Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70, 213–220.

See Also

irr::kappa2

Examples

## Not run:  StartKappa()

Calculates Cohen's kappa for all pairs of columns in a given dataframe

Description

This function is based on the function 'kappa2' from the package 'irr', and simply adds the possibility of calculating several kappas at once.

Usage

kappaCohen(data, weight="unweighted")

Arguments

data

dataframe with 2 × p columns, p being the number of traits coded by the two raters. The first two columns represent the scores attributed by the two raters for the first trait; the next two columns represent the scores attributed by the two raters for the second trait; etc. The dataframe must contain a header, and each column must be labeled as follows: ‘VariableName_X’, where X is a unique character (letter or number) associated with each rater (cf. below for an example).

weight

character string specifying the weighting scheme ("unweighted", "equal" or "squared"). See the function ‘kappa2’ from the package ‘irr’.

Details

For each trait, only complete cases are used for the calculation.
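
As a hedged illustration (a sketch, not one of the package's own examples): if one score is missing for the first trait, only that trait's kappa is computed on the reduced set of complete cases, while the other trait still uses all individuals.

# Sketch: a hypothetical dataset with one missing score for Trait1
scores_na <- data.frame(
	Trait1_A = c(1,0,2,1,NA,1,0,2,1,1),
	Trait1_B = c(1,2,0,1,2,1,0,1,2,1),
	Trait2_A = c(1,4,5,2,3,5,1,2,3,4),
	Trait2_B = c(2,5,2,2,4,5,1,3,1,4)
	)
# Trait1 should be computed on 9 complete cases, Trait2 on all 10
kappaCohen(scores_na, weight="unweighted")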

Value

A dataframe with p rows (one per trait) and three columns, giving respectively the kappa value for each trait, the number of individuals used to calculate this value, and the associated p-value.

Author(s)

Frédéric Santos, [email protected]

References

Cohen, J. (1960) A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.

Cohen, J. (1968) Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70, 213–220.

See Also

irr::kappa2

Examples

# Here we create and display an artificial dataset,
# describing two traits coded by two raters:
scores <- data.frame(
	Trait1_A = c(1,0,2,1,1,1,0,2,1,1),
	Trait1_B = c(1,2,0,1,2,1,0,1,2,1),
	Trait2_A = c(1,4,5,2,3,5,1,2,3,4),
	Trait2_B = c(2,5,2,2,4,5,1,3,1,4)
	)
scores

# Retrieve Cohen's kappa for Trait1 and Trait2,
# to evaluate inter-rater agreement between raters A and B:
kappaCohen(scores, weight="unweighted")
kappaCohen(scores, weight="squared")

Calculates Fleiss' kappa between k raters for all k-tuples of columns in a given dataframe

Description

This function is based on the function 'kappam.fleiss' from the package 'irr', and simply adds the possibility of calculating several kappas at once.

Usage

kappaFleiss(data, nb_raters=3)

Arguments

data

dataframe with k × p columns, k being the number of raters and p the number of traits. The first k columns represent the scores attributed by the k raters for the first trait; the next k columns represent the scores attributed by the k raters for the second trait; etc. The dataframe must contain a header, and each column must be labeled as follows: ‘VariableName_X’, where X is a unique character (letter or number) associated with each rater (cf. below for an example).

nb_raters

integer for the number of raters.

Details

For each trait, only complete cases are used for the calculation.
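
Since kappaFleiss is based on kappam.fleiss, the result for a single trait should match a direct call to irr::kappam.fleiss on that trait's columns. The sketch below (an illustration, not one of the package's own examples) shows this correspondence for one trait scored by three raters.

# Sketch: one trait scored by three raters
library(irr)
ratings <- data.frame(
	Trait1_A = c(1,0,2,1,1,1,0,2,1,1),
	Trait1_B = c(1,2,0,1,2,1,0,1,2,1),
	Trait1_C = c(2,2,2,1,1,1,0,1,2,1)
	)
# Both calls should report the same kappa value
kappaFleiss(ratings, nb_raters=3)
kappam.fleiss(ratings)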

Value

A dataframe with p rows (one per trait) and two columns, giving respectively the kappa value for each trait, and the number of individuals used to calculate this value.

Author(s)

Frédéric Santos, [email protected]

References

Cohen, J. (1960) A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.

Cohen, J. (1968) Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70, 213–220.

See Also

irr::kappam.fleiss

Examples

# Here we create and display an artificial dataset,
# describing two traits coded by three raters:
scores <- data.frame(
	Trait1_A = c(1,0,2,1,1,1,0,2,1,1),
	Trait1_B = c(1,2,0,1,2,1,0,1,2,1),
	Trait1_C = c(2,2,2,1,1,1,0,1,2,1),
	Trait2_A = c(1,4,5,2,3,5,1,2,3,4),
	Trait2_B = c(2,5,2,2,4,5,1,3,1,4),
	Trait2_C = c(2,4,3,2,4,5,2,2,3,4)
	)
scores

# Retrieve Fleiss' kappa for Trait1 and Trait2,
# to evaluate inter-rater agreement between raters A, B and C:
kappaFleiss(scores, nb_raters=3)

A graphical user interface for calculating Cohen's and Fleiss' Kappa

Description

Launches the R-Shiny application. The user can retrieve inter-rater agreement scores from a file (.CSV or .TXT) loaded directly through the graphical interface.

Usage

StartKappa()

Details

Data are imported directly through the graphical user interface. Only CSV and TXT files are accepted.

If there are p variables observed by k raters on n individuals, the input file should be a data frame with n rows and k × p columns. The first k columns represent the scores attributed by the k raters for the first variable; the next k columns represent the scores attributed by the k raters for the second variable; etc. Cohen's or Fleiss' kappas are returned for each variable.

The data file must contain a header, and the columns must be labeled as follows: ‘VariableName_X’, where X is a unique character (letter or number) associated with each rater. An example of a correct data file with two raters is given here: http://www.pacea.u-bordeaux.fr/IMG/csv/data_Kappa_Cohen.csv.
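
As a hedged sketch (the file name is arbitrary, and the separator accepted at import time depends on the options offered in the interface), such a file could be prepared from R as follows:

# Sketch: two raters (A and B) scoring two traits, saved as a CSV file
scores <- data.frame(
	Trait1_A = c(1,0,2,1,1),
	Trait1_B = c(1,2,0,1,2),
	Trait2_A = c(1,4,5,2,3),
	Trait2_B = c(2,5,2,2,4)
	)
write.csv(scores, "data_for_KappaGUI.csv", row.names=FALSE)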

Kappa values are calculated using the functions kappa2 and kappam.fleiss from the package ‘irr’. Please check their help pages for more technical details, in particular about the weighting options for Cohen's kappa. For ordered factors, linear or quadratic weighting can be a good choice, as it gives more weight to strong disagreements. If linear or quadratic weighting is chosen, the levels of the factors are assumed to be ordered alphabetically (as a consequence, a factor with the three levels "Low", "Medium" and "High" would be ordered in an inconvenient way; in this case, please recode the levels with names matching their natural order, as in the sketch below).
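
A sketch of that recoding (the rank-prefix scheme is only one possible convention, not something prescribed by the package): prefixing each level with its rank makes the alphabetical order coincide with the natural order.

# Sketch: recode "Low" / "Medium" / "High" so that alphabetical
# order matches the natural order of the levels
x <- factor(c("Low", "High", "Medium", "Low"))
levels(x)       # "High" "Low" "Medium" -- alphabetical, not natural
x <- factor(x, levels=c("Low", "Medium", "High"),
            labels=c("1_Low", "2_Medium", "3_High"))
levels(x)       # "1_Low" "2_Medium" "3_High" -- both orders now agree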

Value

The function returns no value, but the table of results can be downloaded as a CSV file through the user interface.

Author(s)

Frédéric Santos, [email protected]

References

Cohen, J. (1960) A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.

Cohen, J. (1968) Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70, 213–220.

See Also

irr::kappa2, irr::kappam.fleiss