[IPAC-List] Angoff rating question
LMueller at air.org
Tue Jan 5 12:50:53 EST 2010
I don't know what's "typical," and I don't know anyone who has done a comprehensive industry study (that might be a good idea). Often, lowering the cut score below the mean of SME judgments is done to improve subgroup representation among passing candidates.
You might consider the relative cost of errors in your decision more generally. In some cases, it might be reasonable to raise the passing score above the mean of SME judgments. This is analogous to the "response probability" debate in the literacy community (see http://www.pearsonedmeasurement.com/downloads/conference/AERA08/Response%20Probability%20Criterion%20and%20Subgroup%20Performance.pdf for a discussion of the RP issue).
Lance Anderson and I have presented different sides of considering the relative costs of false positives vs. false negatives at SIOP over the last couple of years. Those presentations generally focus on cases where you have criterion data, but the same principles can be applied when you are unsure of your criterion data. In other words, you might simply want to determine whether there is enough of a relationship between your assessment and critical aspects of job performance to justify raising the standard, or whether the assessment-performance link is more tenuous and you don't want to risk screening out potentially successful candidates. However this is done, the Lanning v. SEPTA case underscores the importance of clearly stating your rationale for moving the cut score and of having SME/stakeholder input in doing so.
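To make the cost-weighing idea concrete, here is a minimal sketch (mine, not from the SIOP presentations) of comparing expected misclassification cost at different cut scores. All scores, outcomes, and cost weights below are hypothetical illustrations:

```python
# Hedged sketch: expected cost of false positives vs. false negatives
# at candidate cut scores. All numbers are hypothetical.

def expected_cost(scores, successful, cut, cost_fp=1.0, cost_fn=1.0):
    """Average misclassification cost at a given cut score.

    scores     -- assessment scores, one per candidate
    successful -- True if that candidate would succeed on the job
    cost_fp    -- cost of passing a candidate who would fail (false positive)
    cost_fn    -- cost of failing a candidate who would succeed (false negative)
    """
    fp = sum(1 for s, ok in zip(scores, successful) if s >= cut and not ok)
    fn = sum(1 for s, ok in zip(scores, successful) if s < cut and ok)
    return (cost_fp * fp + cost_fn * fn) / len(scores)

# Hypothetical candidates: assessment score and eventual job success.
scores     = [55, 60, 62, 68, 70, 74, 78, 81, 85, 90]
successful = [False, False, True, False, True, True, True, True, True, True]

# If false positives are judged three times as costly as false negatives,
# a higher cut score may minimize expected cost.
for cut in (60, 70, 80):
    print(cut, expected_cost(scores, successful, cut, cost_fp=3.0, cost_fn=1.0))
```

The cost ratio itself is a policy judgment, which is exactly where SME/stakeholder input and a documented rationale come in.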
If I were inclined to move a cut score, I would do so by providing examples of candidate performance above and below various cuts, and soliciting SME feedback on whether each candidate's performance is consistent with what is required of someone coming into the position. This approach is similar to a "borderline group" method for setting cut scores, and it can usefully augment an Angoff approach (which, IMO, is open to significant criticism when used alone). You could use field-test data or simulated data (see my write-up in Mort McPhail's Alternative Validation Strategies book) to provide the performance examples.
Lorin Mueller, PhD, SPHR
Principal Research Scientist
American Institutes for Research
1000 Thomas Jefferson St., NW
Washington, DC, 20007
From: ipac-list-bounces at ipacweb.org [mailto:ipac-list-bounces at ipacweb.org] On Behalf Of Joel Wiesen
Sent: Tuesday, January 05, 2010 12:13 PM
To: IPAC-List at ipacweb.org
Subject: [IPAC-List] Angoff rating question
When using Angoff ratings to help set a passing score, how typical is it to use the mean as opposed to subtracting 1.5 or 2 standard deviations from the mean?
Joel P. Wiesen, Ph.D., Director
Applied Personnel Research
62 Candlewood Road
Scarsdale, NY 10583-6040