[IPAC-List] An application of banding procedures

Brian O'Sullivan bjo at iosolutions.org
Thu Aug 25 10:19:48 EDT 2011

For those interested in a challenging application of banding, I have a nuanced scenario that merits some consideration.

My public safety client has been asked by a third party to band test scores from a recent entry-level selection procedure that I developed (incorporating a cognitive exam, a personality inventory, and an integrity inventory). After several discussions with the third party, which has oversight of this process, we have agreed to create fixed bands using SEM banding.

The expert for this third party has weighed in on the mechanics of this banding process and made some very specific recommendations for the city.

The third party's expert has made the following recommendations/requests:

* Calculate the band statistics (i.e., bandwidth) using only the resident portion of the applicant sample (less than ½ of the large sample), since area residents have absolute preference in the selection process (non-residents will not be considered until the pool of residents is exhausted, which is highly unlikely throughout the tenure of the list). The suggestion was then either not to band the non-resident sample at all, or to band it using the bandwidth derived from the resident subsample(!).

o This recommendation does not appear to have a strong rationale. My opinion is that the band statistics should be calculated for the entire sample and then applied either to each group separately (i.e., residents vs. non-residents) or to the entire sample simultaneously, with simple distinctions made between residents and non-residents.

* Use C = 1.65 (one-tailed) for the confidence interval when calculating the bandwidth.

o My experience is with two-tailed confidence intervals (i.e., C = 1.96), though I've seen both methods used.
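For concreteness, the bandwidth arithmetic under either C value can be sketched as follows (the SD and reliability values here are purely illustrative assumptions, not the actual exam statistics):

```python
import math

def sem_bandwidth(sd, reliability, c):
    """Bandwidth = C * SED, where SED = SEM * sqrt(2)
    and SEM = SD * sqrt(1 - reliability)."""
    sem = sd * math.sqrt(1 - reliability)   # standard error of measurement
    sed = sem * math.sqrt(2)                # standard error of the difference
    return c * sed

# Illustrative values only (assumed, not from the actual exam):
sd, rxx = 8.0, 0.90
print(round(sem_bandwidth(sd, rxx, 1.65), 2))  # one-tailed:  5.90
print(round(sem_bandwidth(sd, rxx, 1.96), 2))  # two-tailed:  7.01
```

As the sketch shows, the one- vs. two-tailed choice changes the bandwidth by the ratio 1.65/1.96, so it directly affects how many candidates fall within a band.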

* Determine the bandwidth using the composite test score (itself a composite of the three tests), calculate the actual bands, determine a "band score" by assigning the mid-point of the band as the "score" for everyone in that band, and then add military points to the band score as appropriate (5, 10, or 15 points, though the vast majority of individuals with military points have only 5).

o I would prefer to band after adding military points to the test scores, not before.

o If the suggested method were used, is the mid-point of the band the only reasonable option for the band score (e.g., could the top score in the band be used instead, especially given the recommendation to use C = 1.65 in calculating the bandwidth)?
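If the expert's sequence were followed, the band construction and band-score assignment might look like this minimal sketch (the boundary convention — a band covering (lower, upper] — is my own assumption, since the recommendation did not specify one):

```python
def fixed_bands(top_score, bandwidth, n_bands=3):
    """Fixed bands working down from the top score."""
    return [(top_score - (i + 1) * bandwidth, top_score - i * bandwidth)
            for i in range(n_bands)]

def band_score(score, bands, use="midpoint"):
    """Assign every score in a band the same 'band score':
    the band's midpoint, or optionally its top score."""
    for lower, upper in bands:
        if score > lower or (lower, upper) == bands[-1]:
            return (lower + upper) / 2 if use == "midpoint" else upper

bands = fixed_bands(100.0, 4.0)          # [(96, 100), (92, 96), (88, 92)]
print(band_score(91.9, bands))           # midpoint rule: 90.0
print(band_score(91.9, bands, "top"))    # top-of-band rule: 92.0
```

With a top score of 100.00 and a bandwidth of 4.00, a 91.9 receives a band score of 90 under the midpoint rule, or 92 under a top-of-band rule.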

* As mentioned above, the third party's expert recommended adding the military points (i.e., 5-15 points) to the "band score," i.e., the mid-point of the band.

o This method also seems controversial, as someone whose "raw score" fell at the top of a particular band might feel that their military points were not being factored in fully. For example, if the bandwidth were 4.00 and the highest scorer achieved a 100.00, our bands would be approximately 88-92, 92-96, and 96-100, with band scores of 90, 94, and 98, respectively. An individual who scored a 91.9 and was awarded five military points would have an assigned "band score" of 90 and then a score of 95 after military points were added. That score falls in a band-score "grey" area, and a decision rule would have to be created to account for such scores. For example, we could create "new" bands or band segments, though the differences between these band scores would be less than our bandwidth!
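The grey-area problem in that example can be traced numerically:

```python
bands = [(96.0, 100.0), (92.0, 96.0), (88.0, 92.0)]   # bandwidth = 4.00
band_scores = [(lo + hi) / 2 for lo, hi in bands]     # [98.0, 94.0, 90.0]

raw = 91.9                  # top of the 88-92 band
banded = 90.0               # assigned midpoint band score
with_points = banded + 5    # 95.0 -- lands between band scores 94 and 98,
                            # a gap of only 1 point vs. a 4.00 bandwidth
print(with_points)          # 95.0
```

The resulting 95.0 matches none of the existing band scores, so any decision rule patching it in would create score distinctions finer than the bandwidth itself.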

My contention is that the "cleaner" method is to first add the military points to the composite test score and then calculate the bands. This also seems more consistent with civil service law in this particular state (though the law does not address banding). One difficulty would arise in calculating the reliability of the resulting composite (i.e., the composite test score plus the military points), since the military points are not "measured." Because they are not measured, I would assign them a reliability of 1.00 when computing the composite reliability. Since composite reliability depends on the reliability and variability of the subcomponents, the "perfect" reliability of the military-point component would not unduly inflate the estimate, because the military points account for a minuscule portion of the composite's variance compared to the test scores (i.e., approximately .03% of the variance of the composite test score).
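To illustrate that claim, here is a sketch using Mosier's (1943) composite-reliability formula, with the military points treated as a perfectly reliable, small-variance component. All of the weights, SDs, reliabilities, and intercorrelations below are assumed for illustration only, not the actual exam statistics:

```python
def mosier_reliability(weights, sds, rels, corr):
    """Mosier composite reliability:
    r_c = 1 - sum(w_i^2 * sd_i^2 * (1 - r_i)) / Var(composite),
    where Var(composite) = sum over i,j of w_i * w_j * corr_ij * sd_i * sd_j."""
    k = len(weights)
    var_comp = sum(weights[i] * weights[j] * corr[i][j] * sds[i] * sds[j]
                   for i in range(k) for j in range(k))
    err_var = sum(weights[i] ** 2 * sds[i] ** 2 * (1 - rels[i])
                  for i in range(k))
    return 1 - err_var / var_comp

# Illustrative (assumed) values: three tests plus military points.
w   = [1, 1, 1, 1]
sd  = [8.0, 6.0, 5.0, 0.8]       # military points: tiny variance
r   = [0.90, 0.85, 0.80, 1.00]   # military points: assigned r = 1.00
rho = [[1.0, 0.3, 0.3, 0.0],
       [0.3, 1.0, 0.3, 0.0],
       [0.3, 0.3, 1.0, 0.0],
       [0.0, 0.0, 0.0, 1.0]]
print(round(mosier_reliability(w, sd, r, rho), 4))   # 0.9145
```

Under these assumed numbers, dropping the military-point component entirely changes the composite reliability estimate by less than .001 — consistent with the point that a small-variance component assigned r = 1.00 barely moves the estimate.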

The argument made against this method (banding a score that is itself a combination of a test score and military points) was that it is tantamount to combining "apples & oranges" (I think I've heard this a few times). I contend, however, that this is exactly what is done in nearly every civil service process that requires educational, residency, seniority, or military points (as opposed to standardized or percentage points) to be added to a final test score. I see little difference.

Of course, there are more details involved, but this rendering captures the major issues in this scenario. Any thoughts, comments, or input on this banding scenario would be appreciated.


Brian J. O'Sullivan

Director of Consulting

Industrial/Organizational (I/O) Solutions, Inc.

1127 S. Mannheim Rd. Ste 203

Westchester, IL 60154

p. 888.784.1290 f. 708.410.1558

www.iosolutions.org and www.publicsafetyrecruitment.com