Evaluating Instrument Quality: Rasch Model – Analyses of Post Test of Curriculum 2013 Training

  • Komalasari, Lembaga Penjaminan Mutu Pendidikan
Keywords: Curriculum 2013, post test, Rasch analysis, training

Abstract

The main purpose of this study was to evaluate the quality of the post test used by LPMP Central Kalimantan, Indonesia, in Curriculum 2013 training for grade X teachers. Rasch analysis was used to examine item fit, reliability (item and person), item difficulty, and the Wright map of the post test. Classical Test Theory (CTT) was also applied to determine item discrimination and the quality of distracters. Following a series of iterative Rasch analyses that adopted the “data should fit the model” approach, the 30-item post test of the Curriculum 2013 training was analyzed with ACER ConQuest 4, software based on the Rasch measurement model. All items of the post test fit the Rasch model sufficiently. The difficulty levels (i.e., item measures) of the 30 items range from –1.746 logits to +1.861 logits. Item separation reliability is acceptable at 0.990, while person separation reliability is low at 0.485. The Wright map indicates that the test is difficult for the teachers, or that the teachers have low knowledge of Curriculum 2013; the post test items do not cover the full range of the teachers’ ability levels. Item discrimination falls into two groups: fair discrimination (items 2, 4, 5, 8, 11, and 18) and poor discrimination (items 1, 3, 6, 7, 9, 10, 12, 13, 14, 15, 16, 17, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, and 30). Some distracters in items 1, 2, 6, 7, 8, 9, 11, 13, 16, 17, 18, 19, 20, 22, 24, 25, 27, 28, 29, and 30 are problematic; these distracters require further investigation or revision.
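For readers unfamiliar with the measurement model named above, the dichotomous Rasch model expresses the probability that person n answers item i correctly as a function of person ability and item difficulty, both reported in logits (the scale of the item measures quoted above). This standard formulation is given here only for context and is not taken from the article itself:

\[
P(X_{ni} = 1 \mid \theta_n, \delta_i) = \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)},
\]

where \(\theta_n\) is the ability of person n and \(\delta_i\) is the difficulty of item i. The CTT discrimination index referred to in the abstract is commonly computed as D = (U − L) / n, where U and L are the numbers of correct responses in the upper and lower scoring groups of size n; the exact formula and the cut-offs separating “fair” from “poor” discrimination are not stated in this abstract, so the usual convention (D below 0.20 treated as poor, 0.20–0.39 as fair) is an assumption about typical practice rather than the author’s stated criteria.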


Author Biography

Komalasari, Lembaga Penjaminan Mutu Pendidikan

Lembaga Penjaminan Mutu Pendidikan, Kalimantan Tengah, Indonesia


Published
2018-06-30
How to Cite
[1]
Komalasari 2018. Evaluating Instrument Quality: Rasch Model – Analyses of Post Test of Curriculum 2013 Training. Jurnal Ilmiah Kanderang Tingang. 9, 1 (Jun. 2018), 67-86. DOI: https://doi.org/10.37304/jikt.v9i1.7.