[IPAC-List] MULTILOG Question

Christopher Cerasoli cerasolic at gmail.com
Sun Oct 10 14:25:12 EDT 2010


My question refers to the MULTILOG program for item response theory (IRT) data.
MULTILOG is a Windows-based program that estimates item and theta/ability
parameters under various IRT models (e.g., 1PL, 2PL, Graded).
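
For context (the notation here is mine, not MULTILOG's): in the 2PL model, for
example, the probability that examinee j answers binary item i correctly is

    P(X_{ij} = 1 \mid \theta_j) = \frac{1}{1 + \exp\{-a_i(\theta_j - b_i)\}}

where a_i is the item's discrimination, b_i its difficulty, and theta_j the
examinee's ability. Samejima's Graded model uses the analogous cumulative form

    P(X_{ij} \ge k \mid \theta_j) = \frac{1}{1 + \exp\{-a_i(\theta_j - b_{ik})\}}

with ordered category thresholds b_{ik}. These are the item and theta
parameters MULTILOG estimates.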



Using the Graded model, MULTILOG allows you to specify multiple tests, which
is useful if you want to estimate several distinct abilities/thetas from one
dataset. For example, you could estimate math, reading, and science
proficiency without having to enter three separate datasets individually. Or,
you might have a personality inventory with five parent constructs (e.g.,
conscientiousness, extraversion) and want MULTILOG to analyze one dataset
containing all of the responses.
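
To illustrate the alternative I mean by entering separate datasets (one
MULTILOG run per scale), here is a minimal sketch of how one might split a
combined fixed-width response file into per-scale files. The column positions,
file names, and scale labels are hypothetical placeholders, not my actual
layout.

    # Minimal sketch: split a combined fixed-width response file into one
    # file per scale, so each scale can be run through MULTILOG separately
    # and compared against the single combined-data run.
    # Column ranges and file names are hypothetical placeholders.

    SCALES = {
        "math":    (4, 24),   # examinee ID in columns 0-3, math items in 4-23
        "reading": (24, 44),  # reading items in columns 24-43
        "science": (44, 64),  # science items in columns 44-63
    }

    def split_by_scale(infile: str) -> None:
        """Write <scale>.dat files with the examinee ID plus that scale's responses."""
        with open(infile) as src:
            lines = src.readlines()
        for scale, (start, stop) in SCALES.items():
            with open(f"{scale}.dat", "w") as out:
                for line in lines:
                    out.write(line[:4] + line[start:stop].rstrip("\n") + "\n")

    if __name__ == "__main__":
        split_by_scale("combined.dat")

Each resulting file could then be run through MULTILOG on its own and the item
parameter estimates compared with those from the single combined run.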



The problem I run into is that although MULTILOG lets you specify different
tests within a single dataset, the output suggests it is really running only
one test. I've considered the possibility of common method variance, but I
can't imagine that alone would produce such striking differences.



Has anyone else experienced this issue?



Thank you in advance,



Chris



~~~~~~~~~~~~~~~~~~~~~~~
Christopher P. Cerasoli
Graduate Assistant
State University of New York
University at Albany
Department of Psychology
Social Science 375
1400 Washington Ave.
Albany, NY 12222
cc572532 at albany.edu
~~~~~~~~~~~~~~~~~~~~~~~




