[IPAC-List] Changes in assessment research and practice

Dennis Doverspike dennisdoverspike at gmail.com
Fri Aug 10 09:49:54 EDT 2018

Mike McDaniel is on this listserv, not sure about Frank Schmidt, but Mike
McDaniel or Winfred Arthur could offer an opinion. However, here is my take:

Although the review is great, as far as I know it has not yet been published. The
confusing aspect is that the review indicates there is no correction for
unreliability in the predictor. However, for some strange reason, this rule
does not appear to have been applied to the unstructured interview. I base
this on:

1. In the article, the authors admit that the correlation with job
performance for the unstructured interview may be artificially high, because
the range restriction correction is very large and includes a correction for
unreliability in the predictor, that is, for interrater reliability.

2. Here is where it gets more confusing: in the footnote where they
indicate the value comes from the McDaniel study, they seem to indicate
that the estimated correlation has in fact been corrected for unreliability
due to interrater reliability. Mike could probably tell us whether the
validity value used is corrected or uncorrected. Here is what the footnote
says: "The operational validity presented here was corrected for IRR using
the most appropriate meta-analytic reliability estimate for the interview
measure from Huffcutt et al. (2013)." I can give you my opinion, but it
would be easier if Mike just weighed in.

3. Of course, one way to interpret that is that the unstructured interview
is less valid only because it is so much less reliable than the structured
interview. And all of this could be seen as getting back to Winfred
Arthur's arguments concerning the fallacy of mixing methods with constructs.
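For list members less familiar with these corrections, here is a minimal sketch of why the predictor-reliability step matters. The numbers are hypothetical, not the values from the paper; the formulas are the standard Thorndike Case II range restriction correction and the classical disattenuation formula, not necessarily the exact procedure the authors used:

```python
import math

def correct_validity(r_obs, r_yy, U=1.0, r_xx=None):
    """Estimate corrected validity from an observed correlation.

    r_obs : observed (uncorrected) validity coefficient
    r_yy  : criterion (job performance) reliability
    U     : range restriction ratio, SD_unrestricted / SD_restricted
            (U = 1.0 means no range restriction)
    r_xx  : predictor reliability (e.g., interview interrater
            reliability); None leaves the predictor uncorrected,
            the usual convention for "operational" validity
    """
    # Correct for direct range restriction (Thorndike Case II)
    r = (U * r_obs) / math.sqrt((U**2 - 1) * r_obs**2 + 1)
    # Disattenuate for criterion unreliability
    r = r / math.sqrt(r_yy)
    # Optionally also disattenuate for predictor unreliability --
    # this is the step at issue for the unstructured interview
    if r_xx is not None:
        r = r / math.sqrt(r_xx)
    return r

# Hypothetical numbers: an observed r of .20, criterion reliability
# of .60, and a low interrater reliability of .50
print(round(correct_validity(0.20, 0.60), 3))              # criterion only
print(round(correct_validity(0.20, 0.60, r_xx=0.50), 3))   # plus interrater
```

The point of the sketch: because the unstructured interview has low interrater reliability, dividing by the square root of that reliability inflates the estimate substantially, which is why it matters whether the reported value includes that correction.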


On Fri, Aug 10, 2018 at 9:17 AM Tsugawa, James via IPAC-List <
ipac-list at ipacweb.org> wrote:

> Good morning -
> A year or two ago, I saw an update of the much-cited 1998 Schmidt and
> Hunter meta-analysis of assessment methods.
> One striking result was that unstructured interviews (UIs) fared
> surprisingly well.  The paper at the link below contains the findings of
> interest.
> https://home.ubalt.edu/tmitch/645/articles/2016-100%20Yrs%20Working%20Paper%20for%20Research%20Gate%2010-17.pdf
> So, two questions:
> (1) Was that result further explored or confirmed in other studies?  For
> example, do UIs have previously unseen or underappreciated measurement
> properties?  Or might there be a selection effect at work?  (Given
> standards for meta-analysis and studies, the UIs probably weren't the
> ones of our nightmares.)
> (2) Has it influenced subsequent research--or your own practice, as a
> developer/recommender/user of assessments?
> James Tsugawa / U.S. MSPB
> _______________________________________________________
> IPAC-List
> IPAC-List at ipacweb.org
> https://pairlist9.pair.net/mailman/listinfo/ipac-list

Dennis Doverspike, PhD., ABPP
dennisdoverspike at gmail.com

