[IPAC-List] Measurement education

Pritchard, Ken Ken.Pritchard at MWAA.com
Thu Oct 25 11:31:36 EDT 2012

I like the "negatives and positives" - thx

-----Original Message-----
From: ipac-list-bounces at ipacweb.org [mailto:ipac-list-bounces at ipacweb.org] On Behalf Of Reed, Elizabeth
Sent: Thursday, October 25, 2012 10:56 AM
To: ipac-list at ipacweb.org
Subject: Re: [IPAC-List] Measurement education


I noticed that you are working with fire and rescue personnel; in my experience, fire personnel are definitely an interesting group. When I first joined Columbus over 15 years ago, relations between Fire and Civil Service were tense, primarily for the reasons that you state. I took the approach of listening. My goal was to find out what we were missing in the assessment process. At the time I heard this: "The people at the top of the list are book smart; they cannot apply their knowledge to the real world."

Over the course of the last decade we have completely restructured our assessment process in the fire promotional ranks. It was 25% open-book multiple-choice, 25% closed-book multiple-choice, 25% written work sample and 25% oral board for all promotional ranks. It now looks like this:

Fire Lieutenant and Fire Captain: 25% closed-book multiple-choice (must pass), 50% tactical exercise-video scenarios with written response (must pass), 25% oral board.
Fire Battalion Chief: 25% closed-book multiple-choice, 50% tactical exercise-video scenarios with verbal interactive responses (must pass), 25% oral board.
And new this year for Fire Deputy Chief: 25% written work sample (planning problem), 50% tactical exercise-video scenarios with verbal interactive responses (must pass this phase), 25% oral board. Prior to this year, the deputy chief exam had the multiple-choice component instead of a written work sample.
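The structure above is a classic hurdle-plus-composite model: "must pass" phases gate the candidate, and the final ranking score is a weighted blend of the phases. A minimal sketch of that logic, using the Lieutenant/Captain weights from the post (the function names, phase labels, and the 70-point pass mark are my own assumptions for illustration, not Columbus's actual scoring rules):

```python
def composite_score(scores, weights, must_pass, pass_mark=70.0):
    """Return the weighted composite, or None if any must-pass phase fails.

    scores, weights: dicts keyed by phase name; weights should sum to 1.0.
    must_pass: set of phase names treated as hurdles.
    pass_mark: assumed hurdle threshold (illustrative, not from the post).
    """
    for phase in must_pass:
        if scores[phase] < pass_mark:
            return None  # failed a hurdle; candidate is not ranked
    return sum(weights[p] * scores[p] for p in weights)

# Fire Lieutenant / Captain weighting from the post: 25 / 50 / 25,
# with the multiple-choice and tactical phases as hurdles.
weights = {"mc": 0.25, "tactical": 0.50, "oral": 0.25}
must_pass = {"mc", "tactical"}

# Passes both hurdles: 0.25*80 + 0.50*75 + 0.25*90 = 80.0
print(composite_score({"mc": 80, "tactical": 75, "oral": 90}, weights, must_pass))
# Fails the multiple-choice hurdle, so no composite is computed -> None
print(composite_score({"mc": 65, "tactical": 85, "oral": 95}, weights, must_pass))
```

One design point worth noting: with hurdles, a strong tactical performance cannot compensate for a failed knowledge test, which is exactly what makes the "must pass" phases defensible to candidates.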

Notice the reduced focus on multiple-choice. That took some work; after all, they saw the multiple-choice as objective and fair.

Through these changes, division and union personnel saw us as responsive, and the names finishing at the top of the list were making more sense to them.

Since they are fire personnel and most have basic medical training, I often framed the conversation about ranking in terms of false negatives and false positives. They know from medical training that even the best tests produce false positives and false negatives. I would explain that our goal is to reduce both the false negatives and the false positives--but no test is a perfect predictor. This made sense to them.
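The false-positive/false-negative framing can be made concrete with a small simulation. This sketch is mine, not from the post: it assumes candidates have an unobserved "true ability," the exam measures it with noise (a validity of 0.6 is an assumed, plausible figure for a well-built battery), and a pass cutoff is applied to the noisy exam score. Even with a valid test, both error types appear:

```python
import random

random.seed(42)

N = 10_000
# Unobserved true ability, standard normal (illustrative assumption).
ability = [random.gauss(0, 1) for _ in range(N)]

# Exam score = validity * ability + noise, so corr(exam, ability) ~= validity.
validity = 0.6  # assumed value, not a figure from the post
noise_sd = (1 - validity**2) ** 0.5
exam = [validity * a + random.gauss(0, noise_sd) for a in ability]

cutoff = 1.0      # pass roughly the top 16% on the exam
truly_good = 1.0  # call roughly the top 16% on ability the "best" candidates

# False positive: passed the exam but below the "best" ability bar.
fp = sum(1 for a, e in zip(ability, exam) if e >= cutoff and a < truly_good)
# False negative: one of the "best" on ability, but failed the exam.
fn = sum(1 for a, e in zip(ability, exam) if e < cutoff and a >= truly_good)
passed = sum(1 for e in exam if e >= cutoff)

print(f"passed: {passed}, false positives: {fp}, false negatives: {fn}")
```

Raising the cutoff trims false positives at the cost of more false negatives, and vice versa--the same trade-off the medics already know from diagnostic tests.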

As the results of our assessments improved--that is, generally better candidates were finishing at the top of the list--trust in the process improved. At times candidates appeared at the top who were a surprise. They promoted him/her anyway and found that they had an excellent supervisor/manager whom they would not have noticed had it not been for our process. I'm not saying that we don't still have some false positives and false negatives--but the numbers have been minimized.


Elizabeth A. Reed
Public Safety Assessment Team Manager
Columbus Civil Service Commission

Direct: 614.645.6032

-----Original Message-----
From: ipac-list-bounces at ipacweb.org [mailto:ipac-list-bounces at ipacweb.org] On Behalf Of Partain, Steven C.
Sent: Wednesday, October 24, 2012 7:27 PM
To: ipac-list at ipacweb.org
Subject: [IPAC-List] Measurement education

Folks, I am facing a bit of a crisis of understanding related to measurement in promotional exams. Every so often we have a promotional exam in which the names and ranking of eligibles on the list don't match what our folks know about those candidates. We've had several recently with the "best" people failing the exams. As you might imagine, HR is to blame, and we are under great pressure to change our approach to exams to ensure the "best" people pass and are appropriately ranked. I won't go through all the practices we use to ensure validity, reliability, standardization, etc. We certainly are always looking at those factors and have room to improve. But the underlying message is to make our exams "more successful," meaning that the resulting eligible list matches our workforce's perceptions of the candidates' true ability.

So, here's my question. I feel pretty well-versed in the folly of holistic assessments, the relatively low validity of others "sizing up" candidates intuitively, etc. I have attempted--and obviously failed--to convey some of the science underlying this. How have others successfully overcome this challenge? Are there metaphors that have worked? A published written piece that captures the issue in layman's terms?

Any help is appreciated. Otherwise, I fear we will head down the road of having the workforce rank candidates--kind of a popularity contest.


Steven Partain
HR Manager
Human Resources
Tualatin Valley Fire & Rescue
11945 SW 70th Avenue, Tigard, Oregon 97223 www.tvfr.com
Ph. 503-259-1292

IPAC-List at ipacweb.org
