[IPAC-List] Differential validity vs. differential prediction--latest research

Herman Aguinis haguinis at email.gwu.edu
Sat Jun 11 07:54:59 EDT 2016


Dear IPAC Colleagues,



Thank you for such stimulating exchanges, particularly among those who are
not full-time academics! These issues are clearly important from both theory
and practice perspectives and provide a fabulous opportunity for practitioners
and academics to work together to produce useful knowledge and applications.

 

Although differential validity (i.e., differences in correlations between
test scores and performance across groups) is certainly relevant,
differential prediction (i.e., differences in regression intercepts and
slopes across groups) is even more pertinent to fairness because regression
coefficients, unlike correlations, take differences in standard deviations
across groups into account. Regarding Richard's question about "what is the
latest research" on this topic, see the following article, available at
http://www.hermanaguinis.com/pubs.html:

 

*       Aguinis, H., Culpepper, S.A., & Pierce, C.A. (in press).
Differential prediction generalization in college admissions testing.
Journal of Educational Psychology. doi: 10.1037/edu0000104

 

Also see the following closely related article (available at
http://www.hermanaguinis.com/pubs.html):

 

*       Aguinis, H., Culpepper, S.A., & Pierce, C.A. (2010). Revival of test
bias research in preemployment testing. Journal of Applied Psychology, 95,
648-680.
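
A minimal illustrative sketch of the two analyses, in case it is helpful (my
own toy code on simulated data, not code from either article; all variable
names and parameter values are invented):

# Differential validity vs. differential prediction -- illustrative
# sketch only, using simulated data.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)

def simulate_group(n, mean_x, intercept, slope, noise_sd):
    """Generate (test score, criterion) pairs for one subgroup."""
    x = rng.normal(mean_x, 1.0, n)
    y = intercept + slope * x + rng.normal(0.0, noise_sd, n)
    return x, y

x_a, y_a = simulate_group(500, 0.0, 0.0, 0.5, 1.0)   # group A
x_b, y_b = simulate_group(500, -0.5, 0.0, 0.5, 1.0)  # group B

# Differential validity: compare subgroup correlations (Fisher z).
r_a = stats.pearsonr(x_a, y_a)[0]
r_b = stats.pearsonr(x_b, y_b)[0]
z = (np.arctanh(r_a) - np.arctanh(r_b)) / np.sqrt(
    1 / (len(x_a) - 3) + 1 / (len(x_b) - 3))
print(f"r_A = {r_a:.3f}, r_B = {r_b:.3f}, z = {z:.2f}")

# Differential prediction: moderated regression. The group dummy tests
# intercept differences; the interaction tests slope differences.
x = np.concatenate([x_a, x_b])
y = np.concatenate([y_a, y_b])
g = np.concatenate([np.zeros_like(x_a), np.ones_like(x_b)])
X = sm.add_constant(np.column_stack([x, g, x * g]))
fit = sm.OLS(y, X).fit()
print(fit.summary(xname=["const", "test", "group", "test:group"]))

A significant "group" coefficient indicates an intercept difference and a
significant "test:group" coefficient indicates a slope difference, which is
the usual moderated-regression framing of differential prediction.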

 

I look forward to continued dialogue on these important issues!

 

All the best,

 

--Herman.

 

Herman Aguinis, Ph.D.

Avram Tucker Distinguished Scholar and Professor of Management

George Washington University School of Business

2201 G Street, NW 

Washington, DC 20052

http://hermanaguinis.com/

 

Abstract for Aguinis, Culpepper, and Pierce (in press, Journal of
Educational Psychology)

We introduce the concept of differential prediction generalization in the
context of college admissions testing. Specifically, we assess the extent to
which predicted first-year college grade point average (GPA) based on high
school grade point average (HSGPA) and SAT scores depends on a student's
ethnicity and gender and whether this difference varies across
samples. We compared 257,336 female and 220,433 male students across 339
samples, 29,734 Black and 304,372 White students across 247 samples, and
35,681 Hispanic and 308,818 White students across 264 samples collected from
176 colleges and universities between the years 2006 and 2008. Overall,
results show a lack of differential prediction generalization because
variability remains after accounting for methodological and statistical
artifacts including sample size, range restriction, proportion of students
across ethnicity- and gender-based subgroups, subgroup mean differences on
the predictors (i.e., HSGPA, SAT-Critical Reading, SAT-Math, and
SAT-Writing), and standard deviations for the predictors. We offer an agenda
for future research aimed at understanding several contextual reasons for a
lack of differential prediction generalization based on ethnicity and
gender. Results from such research will likely lead to a better
understanding of the reasons for differential prediction and interventions
aimed at reducing or eliminating it when it exists.
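
As a rough illustration of the generalization question (a toy sketch with
invented parameter values, not the estimation procedure used in the article,
which corrects for range restriction and other statistical artifacts): fit
the moderated regression in each sample, then compare the variability of the
intercept-difference estimates across samples with what sampling error alone
would produce. Leftover variability suggests a lack of generalization.

# Toy sketch of differential prediction generalization across samples.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def one_sample(intercept_gap, n=400):
    """Fit y = b0 + b1*x + b2*group + b3*x*group in one simulated sample;
    return the intercept-gap estimate (b2) and its standard error."""
    g = rng.integers(0, 2, n)              # 0/1 subgroup indicator
    x = rng.normal(-0.4 * g, 1.0, n)       # subgroup mean difference on x
    y = 0.5 * x + intercept_gap * g + rng.normal(0.0, 1.0, n)
    X = sm.add_constant(np.column_stack([x, g, x * g]))
    fit = sm.OLS(y, X).fit()
    return fit.params[2], fit.bse[2]

# True intercept gaps that vary across samples => lack of generalization.
true_gaps = rng.normal(-0.1, 0.15, 300)
est, se = np.array([one_sample(gap) for gap in true_gaps]).T

observed_var = est.var(ddof=1)
sampling_var = np.mean(se ** 2)
print(f"observed variance of estimates:     {observed_var:.4f}")
print(f"expected from sampling error alone: {sampling_var:.4f}")
print(f"residual (between-sample) variance: {observed_var - sampling_var:.4f}")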

 

-----Original Message-----
From: IPAC-List [mailto:ipac-list-bounces at ipacweb.org] On Behalf Of Richard
Joines
Sent: Saturday, June 11, 2016 1:14 AM
To: IPAC List <ipac-list at ipacweb.org>; Joel Wiesen
<jwiesen at appliedpersonnelresearch.com>
Subject: Re: [IPAC-List] Michael McDaniel's Reference to the so-called
Validity-Diversity Dilemma

 

Hi Joel,

 

What's the latest on differential validity research?  I just can't force
myself to plow through another of these articles.

 

Researchers have gleefully debated test fairness for about 50 years now.
Every time a consensus seems to emerge that tests are fair for all groups --
with the caveat that they may be somewhat unfair to whites, since intercept
differences often indicate that minority performance has been overpredicted
-- we know it's just a matter of time until the debate resumes.

 

Nothing unusual for our field.  Take assessment centers -- we still have
psychologists who are dedicated to the idea that dimension ratings in
assessment centers are construct valid, no matter what the evidence
indicates.  It just makes sense to them, so they assume that is what will
ultimately be found if they run just one more study.

 

If we restricted test fairness research to studies that used objective
outcomes consistent with Brogden and Taylor's "The Dollar Criterion" (1950),
I believe the results for differential prediction would be essentially the
same. But we can nevertheless run some Monte Carlo simulations and,
hopefully, eventually reach the conclusion many want to reach.  The I/O
field can be exhausting...
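
A minimal sketch of one such simulation (parameter values invented for
illustration): even with a perfectly unbiased criterion and a test that
measures the same true ability in both groups, predictor unreliability alone
produces the familiar intercept difference in which the lower-scoring group
is overpredicted.

# Monte Carlo sketch: unbiased test, unbiased criterion, yet an apparent
# intercept difference emerges from measurement error in the predictor.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
reps, n = 1000, 1000
gap_estimates = []

for _ in range(reps):
    g = rng.integers(0, 2, n)              # 1 = lower-scoring group
    t = rng.normal(-0.8 * g, 1.0, n)       # true ability
    x = t + rng.normal(0.0, 0.6, n)        # observed test = truth + error
    y = t + rng.normal(0.0, 1.0, n)        # unbiased criterion
    X = sm.add_constant(np.column_stack([x, g]))
    gap_estimates.append(sm.OLS(y, X).fit().params[2])

# A negative mean means the common regression line overpredicts the
# lower-scoring group, with no bias anywhere in the generating model.
print(f"mean estimated intercept difference: {np.mean(gap_estimates):+.3f}")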

 

Joel, if all criterion measures are biased, we're in a real pickle and
should switch to a field that has issues that can be resolved, like
astronomy or physics.  You agree?

 

 

 

----- Original Message -----

From: "Joel Wiesen" <jwiesen at appliedpersonnelresearch.com>

To: "Richard Joines" <mpscorp at value.net>; "IPAC List" <ipac-list at ipacweb.org>

Sent: Friday, June 10, 2016 3:24 PM

Subject: Re: [IPAC-List] Michael McDaniel's Reference to the so-called
Validity-Diversity Dilemma

 

 

> Rich,

> 

> We certainly want to maximize validity, yet we need to consider 

> fairness as well.

> 

> There are indications that many measures of job performance are flawed: 

> men earn more than women, tall earn more than short, and comely earn 

> more than plain.  There is also research showing that minorities 

> encounter a more hostile work environment, so the playing field is not
> level.

> 

> If our tests predict biased criteria accurately, does that mean our 

> tests are biased?

> 

> Joel

> 

> 

> 

> - -

> Joel P. Wiesen, Ph.D., Director

> Applied Personnel Research

> 62 Candlewood Road

> Scarsdale, NY 10583-6040

> http://www.linkedin.com/in/joelwiesen

> (617) 244-8859

> http://appliedpersonnelresearch.com

> 

> 

> 


> On 6/2/16 8:05 PM, Richard Joines wrote:

>> Mike,

>> 

>> You make the statement that "if job-related reading speed has 

>> undesirable consequences such as group differences, one may wish to 

>> sacrifice merit hiring for diversity hiring and increase the time 

>> limit of the exam."

>> 

>> I guess the question for those who think I/O Psychology is a science 

>> is... how does one reach the decision to throw the science out and go 

>> another route?  If the result is lowering validity, I'm certainly not 

>> about to increase the time limit of any of my empirically validated 

>> tests.  There would be no scientific basis for doing that.

>> 

>> I would be interested in what people think about this and how they 

>> view their role and what limitations they think they should observe, 

>> but my view has always been to try to maximize validity while 

>> ensuring compliance with federal guidelines.  Since the 1978 Uniform 

>> Guidelines we've been compelled to look for alternative selection 

>> methods, the idea being that if we can find or develop a test that 

>> has the same or higher validity but lower adverse impact, we should do
>> that.

>> 

>> *However*, the idea that we should sacrifice validity in order to 

>> increase diversity strikes me as going too far.  Who are we to make 

>> such decisions?  We're supposed to be scientists, not social engineers,
>> yes?

>> 

>> Thoughts anyone?

>> 

>> Rich Joines

>> Mgt & Personnel Systems, Inc.

>> www.mps-corp.com

>> 925-932-0203

>> 

>> 

>> 

>> 

>> _______________________________________________________

>> IPAC-List

>> IPAC-List at ipacweb.org

>> https://pairlist9.pair.net/mailman/listinfo/ipac-list

>> 

> 

 

 

_______________________________________________________

IPAC-List

IPAC-List at ipacweb.org

https://pairlist9.pair.net/mailman/listinfo/ipac-list
