Animal Behavior Reliability

Visualization and Metrics

Formally evaluating reliability allows others to assess our approach. We describe the metrics we often use in our scientific papers and the rationale behind each.


There are a few common metrics and strategies used to evaluate consistency. When deciding which metric to use, consider the type of data you have, the goals of your training, and the pros and cons of each suitable method. In some cases, multiple metrics may be needed to provide robust and trustworthy information about consistency.
  • Visual observation
  • Metrics for categorical data
  • Metrics for continuous data
  • Other metrics
Visually checking data during reliability assessment is an important step. We recommend starting here and returning to these visualizations as you calculate metrics. Mismatches between the metrics and the visual story can help you identify problems.
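One common visual check is to plot the two observers' scores against each other with an identity line: points falling near the line indicate agreement. A minimal sketch, using hypothetical paired scores (the data, filename, and variable names here are illustrative, not from the original):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen so the script runs headlessly
import matplotlib.pyplot as plt

# Hypothetical paired scores: the same 8 trials coded by two observers.
observer_1 = [3.1, 4.0, 2.5, 5.2, 3.8, 4.4, 2.9, 5.0]
observer_2 = [3.0, 4.2, 2.7, 5.0, 4.1, 4.3, 3.3, 4.8]

fig, ax = plt.subplots()
ax.scatter(observer_1, observer_2)
lims = [min(observer_1 + observer_2), max(observer_1 + observer_2)]
ax.plot(lims, lims, linestyle="--")  # identity line: perfect agreement lies on it
ax.set_xlabel("Observer 1 score")
ax.set_ylabel("Observer 2 score")
fig.savefig("rater_agreement.png")
```

Systematic departures from the identity line (e.g., one observer consistently scoring higher) are exactly the kind of pattern a single summary metric can hide.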
If your data are categorical, there are a few metrics that are commonly used. Click below to learn more.
  • Concordance
  • Correlation: Ranks
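The two categorical metrics above can be sketched in a few lines. Below, concordance is computed as simple percent agreement and rank correlation as Spearman's rho; the function names are ours, and the rho formula shown assumes no tied values (a real analysis would typically use a library routine that handles ties):

```python
def percent_agreement(rater_a, rater_b):
    """Concordance: fraction of trials where both raters assigned the same category."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def spearman_rho(x, y):
    """Spearman rank correlation via the difference-of-ranks formula.

    Assumes no tied values; with ties, compute Pearson's r on the ranks instead.
    """
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

For example, two raters agreeing on 3 of 4 trials gives a percent agreement of 0.75, and perfectly reversed rankings give a rho of -1.0.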
If your data are continuous, there are a few metrics that are commonly used. Click below to learn more.
  • Regression
  • Correlation: ICC
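As a sketch of the continuous-data metrics above: the regression piece below is ordinary least squares, and the ICC shown is the ICC(3,1) variant (two-way mixed effects, consistency, single rater), which is one common choice but may not match the exact form used in a given paper:

```python
def icc_consistency(ratings):
    """ICC(3,1): two-way mixed-effects, consistency, single rater.

    `ratings` is a list of rows: one row per subject, one column per rater.
    """
    n = len(ratings)       # subjects
    k = len(ratings[0])    # raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ss_err = ss_total - ss_rows - ss_cols                    # residual

    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for regressing y on x."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
             / sum((a - xbar) ** 2 for a in x))
    return slope, ybar - slope * xbar
```

Note that a consistency ICC ignores systematic offsets: ratings of (1, 2), (2, 3), (3, 4) still yield an ICC of 1.0 even though one rater always scores one unit higher, which is why pairing the metric with a visual check matters.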
There are other methods that can be useful for describing or accounting for reliability, but they are less robust than the metrics above. They may still be useful for some types of data or experiments.