[IPAC-List] Wall Street journal article on SAT coaching courses

Dennis Doverspike dd1 at uakron.edu
Fri May 22 11:39:16 EDT 2009

I am a common-sense kind of guy who likes to nutshell. I believe you can
reduce the literature to the following:

Coaching does have an effect on test performance, but not one much different
from that of schooling itself, which of course could be seen as a type of
ultimate test preparation.

So if you want your child to do well on the SAT:

1. Have them pay attention and learn a lot in school.
2. Have them read a lot outside of school.
3. Have them write a lot outside of school.
4. Give them a good breakfast (guess I watch too many commercials).

Having said that, the problem I have with a lot of commercial coaching
programs is that they teach the student that doing well on a test is all
about tricks and learning the tricks. That is the worst message you can
give. Doing well on a test is all about studying hard, being prepared, and
putting in a lot of effort. It is not about learning tricks.

Dennis Doverspike, Ph.D., ABPP
Professor of Psychology
Director, Center for Organizational Research
Senior Fellow of the Institute for Life-Span Development and Gerontology
Psychology Department
University of Akron
Akron, Ohio 44325-4301
330-972-8372 (Office)
330-972-5174 (Office Fax)
ddoverspike at uakron.edu


-----Original Message-----
From: ipac-list-bounces at ipacweb.org [mailto:ipac-list-bounces at ipacweb.org]
On Behalf Of Mark Hammer
Sent: Friday, May 22, 2009 9:15 AM
To: ipac-list at ipacweb.org
Subject: Re: [IPAC-List] Wall Street journal article on SAT coaching courses

Both outcomes can be true: coaching can have little effect, AND practice
and/or coaching can have moderately powerful effects.

The basic question one has to ask is "What could a person, and more
specifically THIS person, learn during coaching and/or practice?" The
answer is: a buncha stuff. And it would depend on the nature of the test
(the cognitive ability tests that Hausknecht et al. looked at likely show
different benefits than an in-basket might) and the characteristics of the
testee at the time.

If I've done all manner of tests with modest success - multiple choice,
written, oral presentations to selection panels, etc. - and have a
reasonable amount of recent experience under my belt, then what I get from
coaching or a retest is much more focussed on the specific content of the
test, and perhaps things like test-specific time management strategies. If
I'm a 17 year-old who freezes at the thought of a test, or who lacks any of
the metacognitive strategicness they will have acquired in another 6 years,
then what I get out of coaching or retests is likely somewhat different, and
may lie more in the domain of simply deflecting negative self-comments
during test-taking (all that Carol Dweck "entity theory" stuff, which is
fascinating and a highly recommended read).

When I used to teach reasonable-sized classes, where written answers were
feasible, I would often pass out little "metacognitive prosthetic" cards to
each student to park on their desk before an exam. The cards contained a
half dozen or so questions for the student to ask themselves, and I told
students before the exam started that these were the sorts of questions that
A+ students *always* asked themselves throughout, and B or C students asked
themselves only now and then. The advice was to consult the card and let it
assist you in building up those A+ habits. The questions were things like:
"Did I answer the question that was asked?", "Did I say what I wanted to
say?", "Could someone not in this course understand what I wrote?", etc.
All essentially self-monitoring queries that nudged the testee towards
effective calibration of effort and product.

These are the sorts of metacognitive advances that develop over the course
of many tests. I have no illusions about students looking at such a card on
one occasion and suddenly leap-frogging ahead in their performance.
However, the idea of pausing to reflect, monitor one's performance, and
re-calibrate as necessary, has to start somewhere. That sort of learning,
though, I would expect to take much longer than what transpires in a
test-retest interval, a 2-month prep workshop, or whatnot. And as much of a
role as it might play in optimum performance on ANY test, it is different
from the many other things a person might acquire over practice tests,
re-tests, workshops, or simply applying to things over and over again.

One of the things I learned during my thousands of hours "running rats" (ah,
the sweet musty poopy smell of science!) was to always ask "What *could*
they learn here?". What I tried to get them to learn was not always what
they learned nor the full extent of what they learned. I think the same
question must be asked when one examines practice and/or coaching effects.
What is it we *think* they will learn, and what DO they learn? To my mind,
the interactions with test type, age, and test experience will likely
be significant, as will the effects of duration of coaching or extent of
practice.

Mark Hammer

>>> "Winfred Arthur, Jr." <w-arthur at neo.tamu.edu> 2009/05/21 10:03 pm >>>

relatedly, the Hausknecht et al. (2007, JAP, 92, 373-385, "Retesting in
selection: A meta-analysis of coaching and practice effects for tests of
cognitive ability") meta-analysis might be equally informative. their
abstract reads as follows:
abstract reads as follows:

"Previous studies have indicated that as many as 25% to 50% of
applicants in organizational and educational settings are retested with
measures of cognitive ability. Researchers have shown that practice
effects are found across measurement occasions such that scores improve
when these applicants retest. In this study, the authors used
meta-analysis to summarize the results of 50 studies of practice effects
for tests of cognitive ability. Results from 107 samples and 134,436
participants revealed an adjusted overall effect size of .26. Moderator
analyses indicated that effects were larger when practice was
accompanied by test coaching and when identical forms were used.
Additional research is needed to understand the impact of retesting on
the validity inferences drawn from test scores."
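As a rough way to read that adjusted effect size of .26: a standardized effect size translates into score points by multiplying by the test's standard deviation, and into a percentile shift via the normal CDF. A minimal sketch in Python, assuming (purely for illustration; not from the abstract) an SAT-like section scale with SD = 100 and normally distributed scores:

```python
import math

def normal_cdf(x: float) -> float:
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

d = 0.26    # adjusted overall practice/retest effect size (Hausknecht et al., 2007)
sd = 100.0  # hypothetical score SD -- SAT-like section scaling, an assumption

# A gain of d standard deviations on this hypothetical scale.
expected_gain = d * sd

# Where the average retester would land in the original score distribution,
# assuming normality (they started, by definition, at the 50th percentile).
percentile = normal_cdf(d)

print(f"Expected retest gain: {expected_gain:.0f} points")
print(f"Average retester moves from the 50th to about the {percentile:.0%} percentile")
```

On those assumptions, an overall d of .26 corresponds to roughly a quarter of a standard deviation -- real, but modest, which is consistent with both halves of the "coaching matters / coaching matters little" debate above.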

- winfred