[IPAC-List] Ricci Update Prompts a Question

RPClare at aol.com
Thu Dec 10 10:32:32 EST 2009

This is a complex set of issues that impact each other but are really
separate considerations.
My first thought is that we need to keep collective bargaining agreements
limited in how they impact the selection process. The more detail that is
included, the harder it becomes to control or minimize adverse impact. In the
same vein, "rule of..." provisions should be drawn as broadly as possible if
they cannot be avoided (which they usually cannot).
Second thought is that we must do the best job we can creating a valid
process given the local "legal", "political" and contractual restrictions.
Third thought is that even if we have done a "perfect," fully defensible
job, the local political and economic realities may cause a decision to "back away"
from the process. Pragmatically, this should be fully explored before the
results are published. Folks are less likely to challenge a "no go"
decision if their stake is unknown than if they know they are "reachable" if the
list is used.
Fourth thought, organizational legal counsel will have a major impact on
the go/no go decision. They often/usually are not sophisticated in our
business. The more we educate them before such a decision is necessary, the more
likely their decision will be a supportive one. While they may make a
recommendation based on the cost issues of this event, we need to help them
understand the longer-range impact (and cost implications) on the selection
process.

In a message dated 12/10/2009 9:47:51 A.M. Eastern Standard Time,
ppluta at hr.lacounty.gov writes:

The discussion of the "internal" applicant pool is quite relevant,
especially in the case of promotional examinations. Typically, where
there is a manifest imbalance there is some lag time involved in
correcting the imbalance. Hence, recruitment drives must focus on
entry-level positions and development opportunities must focus on
getting folks promoted to correct imbalances in higher-level positions.
Obviously, if the distribution of qualified minorities is sparse in the
applicant pool an imbalance is bound to appear on the back end of any
selection procedure, regardless of whether the assessment tool has a
history of resulting in subgroup score differences. However, when all
other variables are controlled for, I believe there is a sufficient
literature to support the contention that written tests that load
heavily on the 'g' factor will almost assuredly result in disparities.

However, the bottom line here is that adverse impact is not illegal, per
se. If an organization conducts a thorough job analysis study and can
demonstrate that the test is job related and consistent with a business
necessity (given that no other testing method would have substantially
equal validity and no adverse impact), it will have a greater chance of
prevailing in a lawsuit. However, there were other political forces at
work in New Haven. It appears that the City deliberately suppressed
evidence that would have supported the validity of the test and made no
effort whatsoever to defend it. Rather, it chose to trammel the rights
of non-minority candidates who participated in a Civil Service
examination in good faith, which made it a case of disparate treatment.

Paul E. Pluta, ABD, SPHR

Human Resources Analyst III

Los Angeles County Department of Human Resources

Workforce Planning, Test Research, & Appeals Division

-----Original Message-----
From: ipac-list-bounces at ipacweb.org
[mailto:ipac-list-bounces at ipacweb.org] On Behalf Of Winfred Arthur, Jr.
Sent: Wednesday, December 09, 2009 8:12 PM
To: IPAC-List at ipacweb.org
Subject: Re: [IPAC-List] Ricci Update Prompts a Question

just echoing the gist of Dennis' points and the excellence of John's
scenario as well. the/a key issue is whether, in terms of adverse
impact, assessment tools are best conceptualized as (a) the source of
the fire ([lousy] tools that are producing the observed adverse impact)
or (b) thermometers (the source of adverse impact is extra to the
assessment tool and it is simply reflecting the outcomes of these
extra-assessment effects). (an important distinction i would like to
make -- which is implied in Dennis' comment -- is that between
subgroup differences [which is a scientific phenomenon] and adverse
impact [which is a legal, administrative phenomenon].) within this
framework, my current thinking is that in most instances, the
thermometer conceptualization is often the explanatory mechanism and yet
most remedies seem to be based on the source-of-the-fire model which is
why they are often not predictably successful. indeed, i am convinced
that the manifestation of adverse impact is so much influenced and
determined by extra-assessment tool factors (as reflected in John's
scenarios, Richard's solutions, Dennis' comments, and Mark's references
to the "internal applicant pool" and "internal labour market") that i am
very skeptical of any a priori claims to the effect that a specified
assessment tool *will not* display adverse impact. indeed, given the
size of the sample and the distribution of specified subgroup members in
the sample on the basis of their scores, one could conceivably have
adverse impact in the absence of subgroup differences and vice versa.
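
winfred's last point -- that adverse impact can appear without subgroup differences, and vice versa -- is easy to see with a toy calculation. the sketch below uses invented score lists and an invented cut score of 70; it only illustrates the arithmetic of the 4/5ths rule, not any real data:

```python
# Hypothetical toy data illustrating the point above: adverse impact (a
# legal/administrative determination via the 4/5ths rule) and subgroup
# mean differences (a statistical phenomenon) can come apart, especially
# in small samples. All numbers here are invented for illustration.

def selection_rate(scores, cut):
    """Fraction of applicants scoring at or above the cut score."""
    return sum(s >= cut for s in scores) / len(scores)

def impact_ratio(focal, reference, cut):
    """Four-fifths-rule ratio: focal-group rate / reference-group rate."""
    return selection_rate(focal, cut) / selection_rate(reference, cut)

CUT = 70

# Case 1: identical subgroup means (both 70), yet the ratio is below 0.8.
majority_1 = [71, 71, 71, 67]   # mean 70; 3 of 4 pass
minority_1 = [69, 69, 71, 71]   # mean 70; 2 of 4 pass
print(round(impact_ratio(minority_1, majority_1, CUT), 2))  # 0.67 -> flagged

# Case 2: an 18-point mean difference, yet no adverse impact at this cut.
majority_2 = [71, 95, 95, 95]   # mean 89; all pass
minority_2 = [71, 71, 71, 71]   # mean 71; all pass
print(round(impact_ratio(minority_2, majority_2, CUT), 2))  # 1.0 -> no flag
```

in case 1, shifting a single borderline examinee would change the flag entirely -- which is the small-sample fragility Dennis describes below.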

so, as assessment professionals, researchers, and scientists, we can
guarantee the development of assessment tools that (1) measure the
specific constructs of interest, (2) are free of extra-construct
variance, (3) are designed to minimize subgroup differences (as a source
of extra-construct variance), and (4) are job-related such that they can
be legally defended on the basis of sound scientific and professional
standards and practices. but can we seriously and really guarantee a
priori that an assessment tool will not display adverse impact?

btw, we have a symposium that has just been accepted at SIOP where i
plan to present the above arguments/points. and thus, i am curious
about your reactions to them.

- winfred

Dennis Doverspike wrote:


Your scenario is excellent because it points out that adverse impact is
situationally and sample (applicant group characteristics) specific. Adverse
impact is not wholly a result of the test and in many cases with small
sample sizes may have little to do with the underlying characteristics of
the test.

Of course, this then results in a situation, as occurred recently, where an
almost identical test can have adverse impact against Blacks as compared
to Whites and also against Whites as compared to Blacks in slightly
different promotional situations. So, adverse impact has no necessary
linkage to any property of the test. In addition, it is very difficult, if
not impossible, to predict ahead of time if a test will have adverse impact
unless we know that so many people will be hired or promoted that it will
not be an issue.

One could argue that is because adverse impact is basically a legal
gatekeeper and has very little to do with assessment science.

Dennis Doverspike, Ph.D., ABPP
Professor of Psychology
Director, Center for Organizational Research
Senior Fellow of the Institute for Life-Span Development and Gerontology
Psychology Department
University of Akron
Akron, Ohio 44325-4301
330-972-8372 (Office)
330-972-5174 (Office Fax)
ddoverspike at uakron.edu

-----Original Message-----
From: ipac-list-bounces at ipacweb.org
[mailto:ipac-list-bounces at ipacweb.org]
On Behalf Of John Ford
Sent: Wednesday, December 09, 2009 7:32 PM
To: IPAC-List at ipacweb.org
Subject: [IPAC-List] Ricci Update Prompts a Question

I appreciate the perspectives on the Ricci case and on best practices with
respect to adverse impact. They raise a question in my mind that I would
appreciate perspective on from experienced selection folks.

Suppose that you adopt targeted recruiting procedures with respect to an
underrepresented minority group. Could the following happen? And if so,
how should it be dealt with?


Government agency X announces that it is concerned about underrepresentation
of minority group Y in its workforce. They adopt a number of measures to
reach out to Y applicants, including placing something in their job
announcements like "Qualified Y applicants are encouraged to apply."

This has a subtle effect on the applicant pool. Before the targeted
recruitment, self-selection among applicants resulted in an ability
distribution around the assessment cut score that is equivalent for all
subgroups. After the targeted recruitment policy is announced, this
changes. Nonminority applicants who perceive themselves as barely qualified
self-select out in greater numbers because they believe the policy reduces
their chances. Minority Y applicants who are marginally qualified apply in
greater numbers because they believe the policy increases their chances. An
equivalent number of well-qualified applicants from all groups still apply,
giving the agency a good, diverse pool from which to select one or two
applicants. But the assessment seems to have adverse impact because it
passes fewer minority Y applicants overall. It is seen as a biased and
inappropriate assessment.


My concern is that this can happen even with a valid assessment that under
reasonable circumstances would not have adverse impact. It appears to happen
because awareness of the policy by the applicant pool, and their
understandable response to the policy, can create an applicant pool with
different ability distributions among nonminority and Y applicants. This
will likely be seen as a fault in the assessment procedure rather than as a
result of applicant response to the recruitment policy.
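
The mechanism is easy to demonstrate numerically. The Monte Carlo sketch below is purely illustrative: the ability distribution (normal, mean 100, SD 10), the cut score, the "marginal" band, and the drop-out/join rates are all invented. Both groups share the same underlying ability; only their self-selection responses to the policy differ, yet the selection ratio falls below the 4/5ths line:

```python
# A rough simulation of asymmetric self-selection after a targeted
# recruitment announcement. All distributions and rates are invented
# assumptions, not estimates from any real applicant pool.
import random

random.seed(42)
CUT = 100.0
N = 50_000

def base_scores(n):
    """Same underlying ability distribution for every subgroup."""
    return [random.gauss(100, 10) for _ in range(n)]

# Nonminority pool: 60% of "barely qualified" applicants (within 5
# points below the cut) self-select out after the policy is announced.
nonminority = [s for s in base_scores(N)
               if not (CUT - 5 <= s < CUT and random.random() < 0.6)]

# Minority Y pool: the same base distribution, plus extra marginally
# qualified applicants (just below the cut) drawn in by the policy.
minority_y = base_scores(N) + [random.uniform(CUT - 10, CUT)
                               for _ in range(N // 5)]

def pass_rate(pool):
    return sum(s >= CUT for s in pool) / len(pool)

ratio = pass_rate(minority_y) / pass_rate(nonminority)
print(f"nonminority pass rate: {pass_rate(nonminority):.3f}")
print(f"minority-Y pass rate:  {pass_rate(minority_y):.3f}")
print(f"impact ratio:          {ratio:.3f}")  # falls below 0.8 here
```

With these assumed rates the ratio lands around 0.74, flagging adverse impact under the 4/5ths rule even though the assessment itself treats identical underlying ability identically in both groups.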

So, do other assessment practitioners agree that this can happen? If so,
how could we reasonably discriminate this situation from one in which there
is a biased assessment? Or is this not a distinction we would care to make,
because we hold to a definition of bias that sees it as present whenever
there is differential impact on demographic subgroups?

Your responses are appreciated.

John Ford
Research Psychologist
U.S. Merit Systems Protection Board

IPAC-List at ipacweb.org