Animal Behavior Reliability
  • Home
  • About
  • Foundations
    • Proposal
    • Measurements >
      • Definitions
    • Team makeup
    • Training >
      • Features of test subsets
      • Assessment
    • Metrics
  • Diving deeper
    • Iterative training processes >
      • Tasks and techniques
      • Categorical data
      • Continuous data
      • Rare outcomes
    • Timeline
    • Troubleshooting
    • Reporting
  • Checklist
  • Resources

Correlation: CCC

Continuous data, statistical test
The Concordance Correlation Coefficient (CCC) measures the strength of agreement between two sets of continuous measurements, much as concordance calculations do for categorical data. It is generally considered a robust metric.
Assumptions:
  • The data are continuous
  • Two observers are being compared

Pros:
  • Measures both the correlation and the agreement between two sets of observations
  • Reflects the level of both agreement and disagreement, not association alone

Cons:
  • May produce biased results when used to assess observer agreement for rare outcomes
  • Highly dependent on the range of measurements: values spread over a wide range can yield a higher, more forgiving CCC than values with a similar level of disagreement confined to a tighter range. For example, if a test's reported values fall between 1 and 100 (wide range), its CCC will be higher than that of a test with values between 1 and 5 (tight range), even when a similar number of mismatches between the expert and trainee are present.
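This range dependence can be seen numerically. Below is a minimal Python sketch (the helper `ccc` and the data are hypothetical, assuming Lin's formula 2·cov(x, y) / (var(x) + var(y) + (mean(x) − mean(y))²)): every trainee score differs from the expert's by exactly 1 unit in both datasets, yet the wide-range data produce a much higher CCC than the tight-range data.

```python
def ccc(x, y):
    # Lin's concordance correlation coefficient:
    # 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((v - mx) ** 2 for v in x) / n
    vy = sum((v - my) ** 2 for v in y) / n
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cxy / (vx + vy + (mx - my) ** 2)

# Every trainee score differs from the expert's by exactly 1 unit,
# first on a wide scale (10-90), then on a tight scale (1-5).
wide_expert,  wide_trainee  = [10, 30, 50, 70, 90], [11, 29, 51, 69, 91]
tight_expert, tight_trainee = [1, 2, 3, 4, 5],      [2, 1, 4, 3, 6]

print(round(ccc(wide_expert, wide_trainee), 3))    # near 1 despite mismatches
print(round(ccc(tight_expert, tight_trainee), 3))  # same mismatches, lower CCC
```

Identical absolute disagreements are "forgiven" on the wide scale because the between-subject variance dominates the formula's denominator.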

How to use:
CCC is a number between -1 and 1. Values close to 1 indicate a strong positive relationship, values close to -1 a strong negative relationship, and values near 0 little or no relationship. Cutoffs for what counts as "acceptable" vary by researcher and discipline: some suggest that values above 0.8 are excellent, while others apply more stringent criteria. McBride (2005), for example, treats values below 0.90 as poor, 0.90-0.95 as moderate agreement, and 0.96-0.99 as substantial agreement.
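As an illustration of this interpretation, here is a short Python sketch (the function `lins_ccc`, the paired scores, and the labels are hypothetical; the thresholds follow McBride's (2005) descriptive scale above):

```python
def lins_ccc(x, y):
    # Lin's concordance correlation coefficient for paired continuous data
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((v - mx) ** 2 for v in x) / n
    vy = sum((v - my) ** 2 for v in y) / n
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cxy / (vx + vy + (mx - my) ** 2)

# Hypothetical behavior durations (s) scored by an expert and a trainee
expert  = [4.2, 5.1, 6.3, 7.8, 9.0, 10.4]
trainee = [4.0, 5.4, 6.1, 8.0, 8.8, 10.6]

value = lins_ccc(expert, trainee)
if value >= 0.96:        # McBride (2005): substantial agreement
    label = "substantial"
elif value >= 0.90:      # moderate agreement
    label = "moderate"
else:                    # poor
    label = "poor"
print(f"CCC = {value:.3f} ({label})")
```

Whatever cutoff is chosen, it should be stated before assessment begins rather than picked after the value is known.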

When analyzing in R, the "epiR" package by Stevenson and Sergeant (2024) can generate concordance correlation coefficients using the function "epi.ccc."

More resources

  • A concordance correlation coefficient to evaluate reproducibility (1989)
  • Repeatability for Gaussian and non-Gaussian data: a practical guide for biologists (2010)
  • Understanding Bland Altman analysis (2015)
  • Common pitfalls in statistical analysis: Measures of agreement (2017)