[IPAC-List] Measurement education
cymeyers at yahoo.com
Thu Oct 25 12:50:13 EDT 2012
Do you know the rank-to-score correlation (Spearman)? Assuming it's not expected to be 100% (because then there would be no need for the test!), where is the mismatch between the eligibles and score performance? While you can always try to increase test validity, it seems to me that the more fundamental issues you're dealing with are:
1) Conveying that different selection steps do not measure the same things. All steps (assuming they are valid), taken together, are what yield the best candidates. The perception that the test result should simply rubber-stamp or "whittle down" the eligibility list is an unfortunate and common error.
2) What is going into your eligibility list? If it is merely a seniority or beauty contest - well, garbage in, garbage out. But assuming you have no control over this step, yes, you will have to ensure that the downstream procedures are robust, and they will correctly fail to "match" the eligibility list. Explaining this is never easy, but later job-performance measures of those selected under a procedure can help to prove your point.
3) Another error is that people assume that someone who was successful at the front-line job will necessarily be successful in the higher-level job. False, and there's plenty of literature regarding this.
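To make the rank-to-score question above concrete, here is a minimal sketch of computing the Spearman correlation between an eligibility-list position and an exam score. The candidate data are entirely hypothetical, and the hand-rolled ranking helper is just for self-containment (in practice one would use a statistics package such as scipy.stats.spearmanr):

```python
def ranks(values, descending=True):
    """Assign 1-based ranks, averaging ties (standard fractional ranking)."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=descending)
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        # Find the run of tied values starting at position i.
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2.0  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: eligibility-list position (1 = top) and exam score.
list_position = [1, 2, 3, 4, 5, 6]
exam_score = [88, 95, 70, 82, 60, 75]

# Negate positions so that "better" is larger on both variables.
rho = spearman([-p for p in list_position], exam_score)
print(round(rho, 3))  # 0.657
```

A rho well below 1.0, as in this toy example, is what you would expect when the two steps measure partly different things; perfect agreement would mean the exam adds no information beyond the list.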
Finally, one area where you may have more control, and which is sometimes overlooked as a true selection step, is the job posting (and an accurately detailed job description). Ensure that both properly convey the nature of the job - the good and the bad. Ensure it isn't watered down. Aim for what is really needed in the job. Helping potential applicants at step one to properly determine their own qualifications and interest in performing the work is critical. For certain types of jobs, including promotions, I encourage using an applicant orientation. This can be attended live or video-streamed, or both, but attendance should be required. It's a good way for leaders and those already in the job to explain what's really expected (again, the ups and downs) and to field questions from the applicants.
From: "Partain, Steven C." <Steven.Partain at tvfr.com>
Sender: ipac-list-bounces at ipacweb.org
Date: Wed, 24 Oct 2012 23:26:52
To: ipac-list at ipacweb.org
Subject: [IPAC-List] Measurement education
Folks, I am facing a bit of a crisis of understanding related to measurement in promotional exams. Every so often we have a promotional exam in which the names and ranking of eligibles on the list don't match what our folks know about those candidates. We've had several recently with the "best" people failing the exams. As you might imagine, HR is to blame, and we are under great pressure to change our approach to exams to ensure the "best" people pass and are appropriately ranked. I won't go through all the practices we use to ensure validity, reliability, standardization, etc. We certainly are always looking at those factors and have room to improve. But the underlying message is to make our exams "more successful," meaning that the resulting eligible list should match our workforce's perceptions of the candidates' true ability.
So, here's my question. I feel pretty well-versed in the folly of holistic assessments, the relatively low validity of others "sizing up" candidates intuitively, etc. I have attempted (and obviously failed) to convey some of the science underlying this. How have others successfully overcome this challenge? Are there metaphors that have worked? Is there a published piece that captures the issue in layman's terms?
Any help is appreciated. Otherwise, I fear we will head down the road of having the workforce rank candidates, which is essentially a popularity contest.
Tualatin Valley Fire & Rescue
11945 SW 70th Avenue, Tigard, Oregon 97223