[IPAC-List] Appealed Items

Reed, Elizabeth EReed1 at Columbus.gov
Tue Oct 18 13:28:59 EDT 2016

Lots of great discussion on this topic!

In Columbus, as with many Civil Service Commissions, we write test questions from a list of reading material given to promotional candidates for police and fire jobs. Candidates have the right to review the key and appeal the keyed response.

When drafting the questions we try to create items at various levels: 1) recalling, recognizing, and identifying; 2) defining, comparing, associating, and classifying; and 3) explaining and interpreting. I wouldn't characterize the higher levels as situational judgment, but there is certainly judgment and interpretation involved.

On occasion, what was intended to be a distractor is in fact correct and, in the SMEs' opinion, may be equally or more correct than the originally keyed alternative. Our process currently allows candidates to argue for items to be tossed or re-keyed. We have not allowed double-keying of items, but I'm not sure why that is. It's a practice that predates my time here at the City, and I've been here over 20 years.
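As a mechanical aside, the difference between a single-keyed, double-keyed, and deleted item is just the size of the accepted-answer set for that item. A minimal sketch in Python; all item IDs, keys, and responses below are invented for illustration:

```python
# Sketch of scoring with single-keyed, double-keyed, and deleted items.
# Each item's key is a set of accepted answers; a deleted item gets an
# empty set and is excluded from the denominator. All data is hypothetical.

def score(responses, keys):
    """responses: {item_id: answer}; keys: {item_id: set of accepted answers}.
    Returns (raw_correct, number_of_scored_items)."""
    scored = {i: k for i, k in keys.items() if k}   # drop deleted items
    correct = sum(1 for i, k in scored.items() if responses.get(i) in k)
    return correct, len(scored)

keys = {
    "q1": {"A"},        # single-keyed
    "q2": {"B", "D"},   # double-keyed: credit for either response
    "q3": set(),        # deleted after appeal: excluded entirely
}
raw, n = score({"q1": "A", "q2": "D", "q3": "C"}, keys)
print(raw, n)  # 2 2
```

Nothing in the arithmetic prevents double-keying; as the discussion below suggests, the barrier is more likely how the resulting scores are used.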

Based upon what I'm hearing, I'm wondering if how we use our scores in the end may be part of the reason. Many commissions require a minimum passing score, based on the raw number or percentage of correct responses, to continue in the process. In such cases, giving credit to all or double-keying makes sense. Maybe some jurisdictions use the Angoff method to determine passing scores, and that shapes their preference for double-keyed items. We use z-scoring, so the cut score (or the score to be combined with other phases) is based upon how candidates compare to the group. This discussion makes me wonder if that had an impact on the decision not to allow double-keyed answers; instead, we delete the item when there are two correct responses.
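A toy illustration of why the choice matters less under norm-referenced scoring: with z-scores, a uniform bonus (such as credit-for-all on a tossed item) shifts the group mean but leaves everyone's relative standing unchanged. The numbers below are invented:

```python
# Sketch: z-scoring makes the cut relative to the candidate pool, so giving
# every candidate one extra point of credit does not change any z-score.
# Raw scores below are made up for illustration.
from statistics import mean, pstdev

def z_scores(raw):
    m, s = mean(raw), pstdev(raw)
    return [(x - m) / s for x in raw]

raw = [62, 70, 75, 81, 88]
bonus = [x + 1 for x in raw]            # credit-for-all on one item
unchanged = all(abs(a - b) < 1e-9
                for a, b in zip(z_scores(raw), z_scores(bonus)))
print(unchanged)  # True
```

Deleting the item instead rescales the raw totals rather than shifting them, but either way the norm-referenced ranking is far less sensitive to the choice than a fixed raw-score cut would be.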


Elizabeth A. Reed
Public Safety Assessment Team Manager
Columbus Civil Service Commission

Direct: 614.645.6032

-----Original Message-----
From: IPAC-List [mailto:ipac-list-bounces at ipacweb.org] On Behalf Of mhammer at 295.ca
Sent: Tuesday, October 18, 2016 9:05 AM
To: ipac-list at ipacweb.org
Subject: Re: [IPAC-List] Appealed Items

1) Safe to assume we are talking about a situational judgment test here? 
Perhaps I didn't read previous posts closely enough, but it didn't seem to be mentioned explicitly.

2) When I used to teach, I stumbled across some papers in the journal Teaching of Psychology that examined what they referred to as the "answer justification option" for multiple-choice tests.  That option permitted students to provide a brief rationale for why they selected the answer they did.  If the justification displayed some pertinent knowledge of the subject matter, the student could receive partial or full credit for the item, even though the opscan would show it as an error.  The authors indicated that, while it did not tend to change grades received in any substantive way, it resulted in greater perceived fairness of the tests.

I used it, and observed that very pattern.  While none of the multiple-choice tests I gave could be considered speed tests, I still encouraged students to be judicious in their use of the option, because writing things out by hand takes time; time that might be better allocated to going over their answers and verifying their choices.  I found that about 10% of my students used the option, they used it for about 10% of the items, and only about 10% of those items needed any scrutiny on my part (the rest were justifications for answers that would have been scored as correct anyway).
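Mechanically, the answer-justification option amounts to a per-candidate override layer on top of the fixed machine key: the opscan score stands, and accepted justifications add credit back for specific items. A hypothetical sketch, with all names and data invented:

```python
# Sketch: answer-justification credit as a per-candidate override layer.
# The machine key stays fixed; an accepted written justification restores
# credit for that candidate on that item only. All data is hypothetical.

def final_score(responses, key, overrides):
    """responses/key: {item: answer}; overrides: set of item IDs where a
    written justification earned credit despite a non-keyed response."""
    return sum(
        1 for item, ans in responses.items()
        if ans == key[item] or item in overrides
    )

key = {"q1": "A", "q2": "C", "q3": "B"}
resp = {"q1": "A", "q2": "D", "q3": "B"}
print(final_score(resp, key, overrides=set()))    # machine score: 2
print(final_score(resp, key, overrides={"q2"}))   # justification accepted: 3
```

Because the override applies per candidate rather than to the key itself, crediting one justified answer does not obligate crediting anyone else's unjustified choice of the same option.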

What it also bought me was the avoidance of the dreaded phrase "If I do it for you, I'd have to do it for everyone".  The option allowed each justified answer to be scored on its own merits, without obligating me to credit all other unjustified "wrong" answers.  It saved me hours of irritating office visits from keeners desperate to get that one extra point.

While I think this presented an elegant solution to my own scenario, I suspect volume is an important factor to consider.  It was no big deal for me to assess 100 justifications several times a year. YMMV, as they say.
Besides, the gist of this thread is what to do *after* the fact.  Still, I offer this option up as a way of sidestepping appeals and problems of this type.

3) SJTs are a curious beast, as I learned some 17 years ago, when our unit experimented with one, along with a translated version.  Performance on the translated version was a full SD below that on the untranslated one.  The publisher had arranged for the translation, and when I inquired whether the translator had access to the answer key during translation, the reply was "No. They translated it line by line, and we have always been happy with their service."

Trouble is, SJTs can include subtle verbal cues as to which response choices are better or worse.  If the situation described is ill-defined or ambiguous, any response choice beginning with "You should..." is instantly a poorer choice than one beginning with "You could...".
It is the complementarity between the stem and the phrasing of the choices that can dictate what the optimal response is.  We learned this the hard way, when we observed that the translation had neglected these cues.  Those taking the translated version headed for a particular "wrong" answer in droves, and avoided the correct one like the plague.

The lesson here is to always be mindful of phrasing in SJT construction, so as to avoid appeals of the type that began this thread.

Mark Hammer
