[IPAC-List] Calculating costs/savings for testing research

Joel Wiesen wiesen at personnelselection.com
Sun Jan 25 17:15:35 EST 2009


Hi Jamie,

Focusing on utility, as did other answers on this listserv, an
even-handed approach might be in order. Some approaches to evaluating
utility ignore the impact of other common screening tools (work
experience, training, GPA, references, etc.). The incremental benefit of
testing over those tools is smaller than the benefit of testing compared
with random selection. In short, it is possible to look silly and
self-serving by overstating the usefulness of testing.
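To make the point concrete, here is a minimal sketch using the Brogden-Cronbach-Gleser utility model, with entirely hypothetical numbers (the validities, SDy, and cost figures below are assumptions for illustration, not estimates for any real test):

```python
# Sketch of the Brogden-Cronbach-Gleser utility model (hypothetical numbers).
# Per-hire gain ~= r * SDy * z_mean - cost, where r is the test's validity,
# SDy is the dollar value of one SD of job performance, and z_mean is the
# mean standardized test score of those selected.

def utility_per_hire(validity, sd_y, z_mean_selected, cost_per_hire):
    """Dollar gain per hire from using the test, vs. the comparison baseline."""
    return validity * sd_y * z_mean_selected - cost_per_hire

sd_y = 20_000   # assumed $ value of 1 SD of job performance
z_sel = 0.8     # assumed mean z-score of selected applicants
cost = 500      # assumed testing cost per hire

# Compared against random selection, the full validity applies:
vs_random = utility_per_hire(0.50, sd_y, z_sel, cost)
# Compared against existing screens (experience, GPA, references),
# only the test's incremental validity applies:
incremental = utility_per_hire(0.10, sd_y, z_sel, cost)

print(f"vs. random selection: ${vs_random:,.0f} per hire")
print(f"incremental over other screens: ${incremental:,.0f} per hire")
```

With these illustrative figures the gain per hire shrinks from $7,500 against a random-selection baseline to $1,100 against the screens already in use, which is the even-handed comparison.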

You don't mention what types of tests your organization uses, but
adverse impact is a risk, especially with cognitive ability tests, where
the gap in test performance can be twice the gap in job performance.

The cost of an employment discrimination challenge can be high, and
that's if you win. If you lose, it can cost really big bucks. Your
employment practices can affect the likelihood of a court challenge and
of prevailing if challenged, but it may be that a given practice will
increase one likelihood while decreasing the other.

If you emphasize cognitive ability tests, recall that job knowledge may
be a good part of what drives the correlation between cognitive tests
and job performance. (Job knowledge probably also reflects interest and
motivation.)

Also, the more universal testing becomes, the less utility it will have
for any one city or for the nation as a whole, since the supply of
superlative candidates is limited.

Some of these ideas may be found in Wagner, R. K. (1997). Intelligence,
training, and employment. American Psychologist, 52, 1059-1069.


Focusing on the exact questions you asked is an interesting exercise.
In effect you asked: What is the likely change (increment?) in utility
of doing a local validity study over a transportability study? What
likely change (increment?) is associated with a change in a cut score?

It may be that utility would go up if one relied on a well-developed
test with good transportability evidence, since the cost of developing
the test can be largely side-stepped.

It may be that there is little incremental change in utility with a
small change in the cut score, especially if the cut score is near the
middle of the distribution.
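One way to check this intuition is to ask how much a small shift in the cut score moves the mean standardized score of those who pass, since per-hire utility is roughly proportional to that mean. A quick sketch, assuming a standard normal score distribution and top-down pass/fail selection (all figures illustrative):

```python
import math

# For a standard normal distribution, the mean z-score of applicants who
# score above a cut c is pdf(c) / (1 - cdf(c)).

def norm_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def mean_z_of_passers(cut):
    """Mean z-score of applicants scoring above `cut` (standard normal)."""
    return norm_pdf(cut) / (1 - norm_cdf(cut))

# Shift the cut score by 0.1 SD near the middle of the distribution:
lo, hi = mean_z_of_passers(0.0), mean_z_of_passers(0.1)
print(f"cut 0.0 -> mean z {lo:.3f}; cut 0.1 -> mean z {hi:.3f}")
```

The passers' mean moves only from about 0.80 to about 0.86, so the corresponding change in per-hire utility (validity x SDy x mean z) is modest, consistent with the point above.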

You raise interesting questions! Perhaps consider sharing your answers
with your manager.

G'luck,
Joel


--
Joel P. Wiesen, Ph.D., Director
Applied Personnel Research
27 Judith Road
Newton, MA 02459
(617) 244-8859
http://appliedpersonnelresearch.com




Madigan, Jamie J wrote:

> Hi all,
>
> I've got what may be kind of an odd question, but I'm looking for some
> advice. I've been planning my 2009 projects with my manager, and one of
> the things she's asked to see for each potential item is an analysis of
> its cost and savings. These are projects in the vein of test validation
> updates, setting cut scores, test implementation, validity
> transportation research, audits of testing programs, maintenance of test
> databases, etc.
>
> On the one hand, I think it's great that she thinks in this way. A lot
> of HR managers don't, and as a result they're unable to communicate with
> business line leaders who think in terms of costs, revenue, and savings.
> But on the other hand, I'm a bit flummoxed as to how to translate the
> inputs and outcomes of these kinds of testing projects into dollars and
> other resources. It's admittedly a weak spot in my own skill set, but
> one I'd like to redress.
>
> Anyone got some advice to share here? Any books, articles, websites, or
> other resources?
>
> Jamie Madigan



