Measures agreement between predicted and actual classifications while accounting for chance agreement.
How It Works
Cohen’s Kappa compares observed accuracy with the accuracy expected from random chance:

$$\kappa = \frac{p_o - p_e}{1 - p_e}$$

- $p_o$: Observed agreement
- $p_e$: Expected agreement by chance
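
A minimal sketch of this computation in Python (NumPy assumed; the function name `cohen_kappa` and the sample labels are illustrative), following the formula above:

```python
import numpy as np

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa from two label sequences: (p_o - p_e) / (1 - p_e)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    labels = np.union1d(y_true, y_pred)

    # p_o: observed agreement, i.e. plain accuracy
    p_o = np.mean(y_true == y_pred)

    # p_e: chance agreement, summed from the marginal frequency
    # of each label in the two labelings
    p_e = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in labels)

    return (p_o - p_e) / (1 - p_e)

y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 2, 0, 0, 2]
print(cohen_kappa(y_true, y_pred))  # ≈ 0.5
```

Here the observed agreement is $p_o = 4/6$ and the chance agreement is $p_e = 1/3$, giving $\kappa = 0.5$.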
What to Look For
- Values range from -1 (complete disagreement) through 0 (agreement no better than chance) to 1 (perfect agreement)
- $0.61 \le \kappa \le 0.80$ (Landis & Koch scale): Substantial agreement
- Useful for multi-class classification and inter-rater reliability
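
If scikit-learn is available, the same metric ships as `sklearn.metrics.cohen_kappa_score`, which handles multi-class labels directly; a short usage sketch (the sample labels are illustrative):

```python
from sklearn.metrics import cohen_kappa_score

y_true = ["cat", "dog", "bird", "cat", "dog"]
y_pred = ["cat", "dog", "cat", "cat", "bird"]

# Multi-class string labels are accepted as-is
print(cohen_kappa_score(y_true, y_pred))  # ≈ 0.375, fair agreement
```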