How Valid Are Our Language Test Interpretations? A Demonstrative Example
<?xml version="1.0" encoding="UTF-8"?>
<article key="pdf/14112" mdate="2010-07-23 00:00:00">
  <author>Masoud Saeedi and Shirin Rahimi Kazerooni and Vahid Parvaresh</author>
  <title>How Valid Are Our Language Test Interpretations? A Demonstrative Example</title>
  <pages>1587 - 1598</pages>
  <year>2010</year>
  <volume>4</volume>
  <number>7</number>
  <journal>International Journal of Psychological and Behavioral Sciences</journal>
  <ee>https://publications.waset.org/pdf/14112</ee>
  <url>https://publications.waset.org/vol/43</url>
  <publisher>World Academy of Science, Engineering and Technology</publisher>
  <abstract>Validity is an overriding consideration in language testing. If a test score is to be used for a particular purpose, that use must be supported by empirical evidence. This article addresses the validity of a multiple-choice achievement test (MCT) administered at the end of each semester to decide on students' mastery of a general English course. To provide empirical evidence pertaining to the validity of this test, two criterion measures were used: a Cloze test and a C-test, both of which are reported to gauge general English proficiency. The results of the analyses show a statistically significant correlation among participants' scores on the MCT, the Cloze test, and the C-test. Drawing on these findings, it can be cautiously deduced that the three tests measure the same underlying trait. However, allowing for the limitations of using criterion measures to validate tests, no absolute claim can be made as to the validity of the MCT.</abstract>
  <index>Open Science Index 43, 2010</index>
</article>
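
The abstract describes a criterion-related validation: scores on the MCT are correlated with scores on a Cloze test and a C-test taken by the same examinees. As a minimal sketch of how such an analysis might be run (the paper does not report its software, and the score values below are hypothetical placeholders, not the study's data), Pearson correlations between the MCT and each criterion measure could be computed like this:

```python
# Illustrative sketch only: score lists are hypothetical placeholders.
# It mirrors the criterion-related approach described in the abstract:
# correlating MCT scores with Cloze and C-test scores for the same examinees.
from scipy import stats

mct_scores    = [14, 18, 11, 20, 16, 13, 19, 15]  # hypothetical MCT scores
cloze_scores  = [22, 27, 18, 30, 25, 20, 28, 23]  # hypothetical Cloze scores
c_test_scores = [35, 41, 30, 45, 39, 33, 43, 37]  # hypothetical C-test scores

for name, criterion in [("Cloze", cloze_scores), ("C-test", c_test_scores)]:
    r, p = stats.pearsonr(mct_scores, criterion)
    print(f"MCT vs {name}: r = {r:.2f}, p = {p:.3f}")
```

A statistically significant positive r for both comparisons would be the kind of evidence the abstract cites for concluding, cautiously, that the tests tap the same underlying trait.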