An Analysis of the Ratings and Interrater Reliability of High School Band Contests

Document Type

Article

Publication Title

Journal of Research in Music Education

Publication Date

4-2012

Keywords

assessment, band, contests, festivals, interrater reliability

Abstract

The purpose of this study was to examine procedures for analyzing ratings of large-group festivals and provide data with which to compare results from similar events. Data consisted of ratings from senior division concert band contests sponsored by the South Carolina Band Directors Association from 2008 to 2010. Three concert-performance and two sight-reading judges evaluated each band to determine a final rating. Research questions examined (a) frequency distributions of ratings; (b) interrater reliability as measured by Spearman correlation of individual judges’ ratings, internal consistency (α), and two forms of interrater agreement (IRA); and (c) differences in mean ratings among individual adjudicators, contest sites, years, and classifications. The average final rating for all bands (N = 353) was 1.73, with 86.7% (n = 306) earning a I/Superior or II/Excellent. Interrater correlation, IRA, and internal consistency were higher for sight-reading versus concert performance. Each of these measures rose above the .80 benchmark for good reliability, except interrater correlation and average pairwise IRA in the concert portion of the contest. Data indicated significant differences in 8 out of 18 judging panels, in contest sites in 2010, and among ensemble classifications. In this study, the author demonstrated an effective procedure for analyzing ratings of large-group festivals and identified implications for improving these events.
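The abstract names three reliability measures: pairwise Spearman correlation among judges, internal consistency (Cronbach's α), and interrater agreement (IRA). The sketch below is not the author's code; it is a minimal illustration of how these measures could be computed from a hypothetical matrix of judges' ratings, assuming the numeric contest scale in which 1 = Superior and 2 = Excellent, with all data invented for demonstration.

```python
# Hypothetical example of the reliability measures named in the abstract.
# Rows = bands, columns = judges (e.g., a three-judge concert panel).
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

ratings = np.array([
    [1, 1, 2],
    [2, 2, 2],
    [1, 2, 1],
    [3, 2, 3],
    [2, 3, 2],
    [1, 1, 1],
])

# 1. Average pairwise Spearman correlation among judges
pairs = list(combinations(range(ratings.shape[1]), 2))
rhos = [spearmanr(ratings[:, i], ratings[:, j]).correlation for i, j in pairs]
print("Mean pairwise Spearman rho:", np.mean(rhos))

# 2. Internal consistency (Cronbach's alpha), treating judges as "items"
k = ratings.shape[1]
item_var_sum = ratings.var(axis=0, ddof=1).sum()
total_var = ratings.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_var_sum / total_var)
print("Cronbach's alpha:", alpha)

# 3. One simple form of interrater agreement: average proportion of
#    judge pairs assigning identical ratings to the same band
exact = np.mean([np.mean(ratings[:, i] == ratings[:, j]) for i, j in pairs])
print("Average pairwise exact agreement:", exact)
```

The same computations would be run separately for the concert-performance and sight-reading panels to compare their reliability, as the abstract describes.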

Comments

This article was published in Journal of Research in Music Education, Volume 60, Issue 1, February 16, 2012, https://doi.org/10.1177/0022429411434932.
