[IPAC-List] Determining rater agreement when unique rater panels are used

Shekerjian, Rene Rene.Shekerjian at cs.state.ny.us
Fri Aug 27 12:19:00 EDT 2010


Could each panel be considered a sample? Then you could test whether the panels came from the same population, perhaps based on something like the mean and the standard deviation of the within-panel ratings. I suppose this might not work if you had a lot of overlap among the panels.
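
For what it's worth, a rough sketch of that idea in Python might look like the following. The data here are simulated, and the panel sizes, rating scale, and data layout are all assumptions rather than anything from the actual study:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Simulated stand-in for real scores: each panel contributes a
    # handful of ratings on (say) a 1-5 scale.
    ratings_by_panel = {
        f"panel_{i}": rng.normal(loc=3.5, scale=0.6, size=rng.integers(5, 15))
        for i in range(20)
    }
    groups = list(ratings_by_panel.values())

    # One-way ANOVA: do panel means look like draws from one population?
    f_stat, p_means = stats.f_oneway(*groups)

    # Levene's test: are the within-panel spreads (rater disagreement)
    # comparable across panels?
    w_stat, p_spread = stats.levene(*groups)

    print(f"ANOVA on panel means: F = {f_stat:.2f}, p = {p_means:.3f}")
    print(f"Levene on within-panel spread: W = {w_stat:.2f}, p = {p_spread:.3f}")

A significant result on either test would suggest the panels are not behaving interchangeably; a null result is at least consistent with them looking like samples from one population.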

I am curious to see what answers you get.

René

René Shekerjian | Testing Services Division | NYS Department of Civil Service | 518-474-3778
===========================================================================

-----Original Message-----
From: Heather Patchell [mailto:hpatchell at biddle.com]
Sent: Thursday, August 26, 2010 6:14 PM
To: IPAC-List at ipacweb.org
Subject: [IPAC-List] Determining rater agreement when unique rater panels are used

Suppose you have interview score data from three-rater panels whose
membership changed frequently, with the result that virtually all of the
interview panels contained a unique combination of three raters.



We originally wanted to do a generalizability study (G and D studies).
However, typical methods of estimating variance components (e.g., REML,
urGENOVA) failed to converge because virtually all of the panels were unique.



Is there a way to calculate a level of rater agreement with this type of
data, or perhaps some way to better understand whether raters were
exhibiting an acceptable level of agreement? (Note: we also tried
correlating matched pairs of raters, but there were very few matched
pairs of raters that we could use.)
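
A bare-bones version of that matched-pairs check might look something like the sketch below; the column names and the toy data are made up purely for illustration:

    from itertools import combinations
    import pandas as pd

    # Hypothetical long-format data: one row per (candidate, rater, score).
    df = pd.DataFrame({
        "candidate": [1, 1, 1, 2, 2, 2, 3, 3, 3],
        "rater":     ["A", "B", "C", "A", "B", "D", "B", "C", "D"],
        "score":     [4, 3, 4, 5, 4, 5, 2, 3, 2],
    })

    # Candidate-by-rater score matrix (NaN where a rater did not sit
    # on that candidate's panel).
    wide = df.pivot(index="candidate", columns="rater", values="score")

    # Correlate every pair of raters who rated the same candidates.
    for r1, r2 in combinations(wide.columns, 2):
        both = wide[[r1, r2]].dropna()
        if len(both) >= 2:  # in practice you would want far more shared candidates
            print(f"{r1} vs {r2}: n = {len(both)}, "
                  f"r = {both[r1].corr(both[r2]):.2f}")

As noted above, when nearly every panel is unique, very few rater pairs share enough candidates for these correlations to be informative.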



Thanks for your insight.





Heather J Patchell

Consultant

Managing Editor, EEO Insight

Biddle Consulting Group, Inc.

193 Blue Ravine Suite 270 | Folsom, CA 95630

P: 916.294.4250 x 155 | 800.999.0834 x 155

F: 916.294.4255

www.biddle.com | www.BCGinstitute.com