[IPAC-List] Does the day you take an oral exam matter? -Killing the Messenger
Mark.Hammer at psc-cfp.gc.ca
Wed Apr 1 16:21:17 EDT 2009
To Dennis' very realistic depiction, I would counter with the following: the amount of distrust is proportional to the weight of the assessment. In other words, the higher the stakes, the greater the perception that there must be something amiss if one was not successful. During my teaching career, I deliberately used multiple sources of evaluation whose weight rose over the semester (5%, 10%, 15%) and never exceeded 30%, expressly to avoid the confrontation that seems invariably to come when all the marbles are riding on a single test. Generally, it achieved that objective.
Is this assumption borne out in the employment testing context?
Just for the heck of it, I ran some quick correlations on data I happened to be working on at the time of this thread. The survey asks 355-381 appointees (and this is important, since there are no unsuccessful people in the set) across a broad range of public service positions and levels which of some 9 assessment tools were used in their selection, and whether:
1 - the knowledge, skills or abilities you were assessed for were related to the actual job requirements
2 - the assessment methods or tests used provided you an opportunity to demonstrate your capabilities
3 - the assessment methods or tests used were applied in a fair manner
I can add up the number of assessment tools used to get a rough indication of how "high stakes" each test was (although keep in mind that number of tests is not the same as individual weighting).
Much to my surprise, while the responses to the 3 questions above are related to each other in the expected manner (with 1 and 2 being most related), none of the three questions is related to the total number of tools used. Indeed, the only opinion that shows any sort of connection to testing is the relationship between question #2 and whether or not the appointee completed a written knowledge test.... and the correlation was NEGATIVE! (r = -.133, p = .01). That is, those who took a written knowledge test were less likely to feel they had a chance to show their capabilities for the job (huh? what the...?).
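For readers who want to replicate this sort of check on their own data: correlating a yes/no item (took a written knowledge test) with a rating-scale response is just a Pearson correlation with one binary variable, i.e. a point-biserial correlation. A minimal sketch, with purely illustrative toy data (the variable names and numbers below are not from the actual survey):

```python
# Sketch of the check described above: correlate a binary
# "took a written knowledge test" flag with a rating of whether the
# assessment let the candidate demonstrate their capabilities.
# All data here are made up for illustration.
from math import sqrt

def pearson_r(x, y):
    """Plain Pearson correlation; when one variable is binary (0/1),
    this is equivalent to the point-biserial correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy data: 1 = took a written knowledge test; ratings on a 1-5 scale.
took_test = [1, 1, 1, 0, 0, 0, 1, 0]
rating    = [2, 3, 2, 4, 5, 4, 3, 5]
print(round(pearson_r(took_test, rating), 3))  # negative, as in the post
```

A negative r here means that, in this toy sample, those who took the written test gave lower ratings, which is the direction of the surprising result above (r = -.133 in the real data).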
There are sources of range restriction galore here, so I won't make too much of it. We also don't ask whether they felt the tests were equally fair to all candidates, just to them personally (as competition winners), and we have no idea of the weighting or even the difficulty of the tests. And if people took both a personality test and a situational judgment test, they check off only a single box for "other written tests". It also bears noting that the subject matter of this thread concerns a single occupation, while I'm looking at data that collapses executive appointments, clerical, general labour, policy analysts, IT, chemists, social workers, prison guards, you name it. There may well be patterns beneath the aggregate, but I won't pursue it any further today. I receive data on unsuccessful candidates in about 6 weeks' time, and hope to look at it then. Still, I'm kind of surprised.
All of that being said, I think Don's points are excellent and spot on. Candidate suspicions are based on what they *don't* know more than what they *do* know. Remember that Elizabeth's original query was concerning candidates wanting to make arrangements beforehand, based on naive assumptions about the test, as opposed to the post-hoc opinions I trotted out above. So anything that can fill in the information gaps ahead of time will help to defuse the paranoia.