[IPAC-List] Ricci Update Prompts a Question

Pluta, Paul ppluta at hr.lacounty.gov
Thu Dec 10 09:55:07 EST 2009


The discussion of the "internal" applicant pool is quite relevant,
especially in the case of promotional examinations. Typically, where
there is a manifest imbalance there is some lag time involved in
correcting the imbalance. Hence, recruitment drives must focus on
entry-level positions and development opportunities must focus on
getting folks promoted to correct imbalances in higher-level positions.
Obviously, if the distribution of qualified minorities is sparse in the
applicant pool an imbalance is bound to appear on the back end of any
selection procedure, regardless of whether the assessment tool has a
history of resulting in subgroup score differences. However, when all
other variables are controlled for, I believe there is a sufficient
literature to support the contention that written tests that load
heavily on the 'g' factor will almost assuredly result in disparities.



However, the bottom line here is that adverse impact is not illegal, per
se. If an organization conducts a thorough job analysis study and can
demonstrate that the test is job related and consistent with a business
necessity (given that no other testing method would have substantially
equal validity and no adverse impact), it will have a greater chance of
prevailing in a lawsuit. In New Haven, though, other political forces were
at work. It appears that the City deliberately suppressed
evidence that would have supported the validity of the test and made no
effort whatsoever to defend it. Rather, it chose to trammel the rights
of non-minority candidates who participated in a Civil Service
examination in good faith, which made it a case of disparate treatment.



Paul E. Pluta, ABD, SPHR

Human Resources Analyst III

Los Angeles County Department of Human Resources

Workforce Planning, Test Research, & Appeals Division



-----Original Message-----
From: ipac-list-bounces at ipacweb.org
[mailto:ipac-list-bounces at ipacweb.org] On Behalf Of Winfred Arthur, Jr.
Sent: Wednesday, December 09, 2009 8:12 PM
To: IPAC-List at ipacweb.org
Subject: Re: [IPAC-List] Ricci Update Prompts a Question



just echoing the gist of Dennis' points and the excellence of John's
scenario as well. the/a key issue is whether, in terms of adverse
impact, assessment tools are best conceptualized as (a) the source of
the fire ([lousy] tools that are producing the observed adverse impact)
or (b) thermometers (the source of adverse impact is extra to the
assessment tool and it is simply reflecting the outcomes of these
extra-assessment effects). (an important distinction i would like to
make -- which is implied in Dennis' comment -- is that between
subgroup differences [which is a scientific phenomenon] and adverse
impact [which is a legal, administrative phenomenon].) within this
framework, my current thinking is that in most instances, the
thermometer conceptualization is often the explanatory mechanism and yet
most remedies seem to be based on the source-of-the-fire model which is
why they are often not predictably successful. indeed, i am convinced
that the manifestation of adverse impact is so much influenced and
determined by extra-assessment tool factors (as reflected in John's
scenarios, Richard's solutions, Dennis' comments, and Mark's references
to the "internal applicant pool" and "internal labour market") that i am
very skeptical of any a priori claims to the effect that a specified
assessment tool *will not* display adverse impact. indeed, given the
size of the sample and the distribution of specified subgroup members in
the sample on the basis of their scores, one could conceivably have
adverse impact in the absence of subgroup differences and vice versa.
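winfred's distinction between subgroup differences (a scientific phenomenon) and adverse impact (a legal, administrative determination) can be sketched numerically. the following is a minimal illustration of the EEOC four-fifths rule -- all counts are hypothetical -- showing how small-sample noise alone can trip the threshold even when the underlying pass probability is identical across groups:

```python
def impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of the lower subgroup selection rate to the higher one
    (compared against the EEOC four-fifths, i.e. 0.8, threshold)."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    lo, hi = sorted([rate_a, rate_b])
    return lo / hi

# Same underlying 50% pass probability in both groups, but with small
# samples one group can land at 10 of 20 while the other lands at 5 of 14:
ratio = impact_ratio(10, 20, 5, 14)
print(round(ratio, 3))  # 0.714 -> below 0.8, so adverse impact is flagged
```

the arithmetic, not the assessment tool, produces the flag here -- which is the thermometer model in miniature.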

so, as assessment professionals, researchers, and scientists, we can
guarantee the development of assessment tools that (1) measure the
specific constructs of interest, (2) are free of extra-construct
variance, (3) are designed to minimize subgroup differences (as a source
of extra-construct variance), and (4) are job-related such that they can
be legally defended on the basis of sound scientific and professional
standards and practices. but can we really guarantee a
priori that an assessment tool will not display adverse impact?

btw, we have a symposium that has just been accepted at SIOP where i
plan to present the above arguments/points. and thus, i am curious
about your reactions to them.

- winfred

Dennis Doverspike wrote:

John,

Your scenario is excellent because it points out that adverse impact is
both situationally and sample (applicant group characteristics) specific.
Adverse impact is not wholly a result of the test, and in many cases with
small sample sizes it may have little to do with the underlying
characteristics of the test.

Of course, this then results in a situation, as occurred recently, where
an almost identical test can have adverse impact against Blacks as
compared to Whites, and also against Whites as compared to Blacks, in
slightly different promotional situations. So, adverse impact has no
necessary linkage to any property of the test. In addition, it is very
difficult, if not impossible, to predict ahead of time whether a test will
have adverse impact, unless we know that so many people will be hired or
promoted that it will not be an issue.

One could argue that this is because adverse impact is basically a legal
gatekeeper and has very little to do with assessment science.
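Dennis's point that an almost identical test can be flagged in opposite directions can be sketched with two small promotional rounds. The counts below are hypothetical; the test and cut score are the same in both rounds, and only the composition of each small applicant pool differs:

```python
def four_fifths_ratio(rate_low_group, rate_high_group):
    """Selection-rate ratio compared against the EEOC 0.8 threshold."""
    return rate_low_group / rate_high_group

# Round 1: 3 of 8 Black candidates pass, 5 of 8 White candidates pass.
round1 = four_fifths_ratio(3 / 8, 5 / 8)

# Round 2: 2 of 5 White candidates pass, 4 of 6 Black candidates pass.
round2 = four_fifths_ratio(2 / 5, 4 / 6)

print(round(round1, 2), round(round2, 2))  # 0.6 0.6 -> flagged both ways
```

The same instrument is flagged against Black candidates in one round and against White candidates in the next, with no change to any property of the test.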

Dennis Doverspike, Ph.D., ABPP
Professor of Psychology
Director, Center for Organizational Research
Senior Fellow of the Institute for Life-Span Development and Gerontology
Psychology Department
University of Akron
Akron, Ohio 44325-4301
330-972-8372 (Office)
330-972-5174 (Office Fax)
ddoverspike at uakron.edu



-----Original Message-----
From: ipac-list-bounces at ipacweb.org
[mailto:ipac-list-bounces at ipacweb.org]
On Behalf Of John Ford
Sent: Wednesday, December 09, 2009 7:32 PM
To: IPAC-List at ipacweb.org
Subject: [IPAC-List] Ricci Update Prompts a Question

I appreciate the perspectives on the Ricci case and on best practices
with respect to adverse impact. They raise a question in my mind that I
would appreciate perspective on from experienced selection folks.

Suppose that you adopt targeted recruiting procedures with respect to an
underrepresented minority group. Could the following happen? And if so,
how should it be dealt with?

--SCENARIO--

Government agency X announces that it is concerned about
underrepresentation of minority group Y in its workforce. It adopts a
number of measures to reach out to Y applicants, including placing
something in their job announcements like "Qualified Y applicants are
encouraged to apply."

This has a subtle effect on the applicant pool. Before the targeted
recruitment, self-selection among applicants resulted in an ability
distribution around the assessment cut score that was equivalent for all
subgroups. After the targeted recruitment policy is announced, this
changes. Nonminority applicants who perceive themselves as barely
qualified self-select out in greater numbers because they believe the
policy reduces their chances. Minority Y applicants who are marginally
qualified apply in greater numbers because they believe the policy
increases their chances. An equivalent number of well-qualified
applicants from all groups still apply, giving the agency a good, diverse
pool from which to select one or two top applicants. But the assessment
seems to have adverse impact because it passes fewer minority Y
applicants overall. It is seen as a biased and inappropriate assessment.

--END SCENARIO--
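The scenario above can be sketched with expected pass rates under normal ability distributions. All parameters (means, SD, cut score) are hypothetical; the test and its cut score never change between the two periods, only the applicant pools do:

```python
from statistics import NormalDist

CUT = 100.0  # fixed cut score on the same, unchanged assessment

def pass_rate(pool_mean, pool_sd=10.0):
    """Expected pass rate for an applicant pool ~ Normal(mean, sd)."""
    return 1.0 - NormalDist(pool_mean, pool_sd).cdf(CUT)

# Before targeted recruitment: both pools equivalent around the cut score.
before = pass_rate(100) / pass_rate(100)

# After: marginally qualified Y applicants apply in greater numbers (pool
# mean drops to 95); barely qualified nonminority applicants self-select
# out (pool mean rises to 103). The test itself is untouched.
after = pass_rate(95) / pass_rate(103)

print(round(before, 2))  # 1.0 -> no adverse impact
print(round(after, 2))   # ~0.50, well below 0.8 -> adverse impact appears
```

Under these assumptions the selection-rate ratio drops from 1.0 to roughly 0.50 with no change in the assessment at all, which is exactly the concern the scenario raises.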

My concern is that this can happen even with a valid assessment that
under reasonable circumstances would not have adverse impact. It can
happen because awareness of the policy by the applicant pool, and their
understandable response to it, can create an applicant pool with
different ability distributions among nonminority and Y applicants. This
will likely be seen as a fault in the assessment procedure rather than as
a result of applicant response to the recruitment policy.

So, do other assessment practitioners agree that this can happen? If so,
how could we reasonably discriminate this situation from one in which
there is a biased assessment? Or is this not a distinction we would care
to make, because we hold to a definition of bias that sees it as present
whenever there is differential impact on demographic subgroups?

Your responses are appreciated.

John Ford
Research Psychologist
U.S. Merit Systems Protection Board

_______________________________________________________
IPAC-List
IPAC-List at ipacweb.org
http://www.ipacweb.org/mailman/listinfo/ipac-list
