The Test of Construct Validity for One-Factor Model

Yahfizham, Irwan Yusti, Muhammad Luthfi Hamzah

Abstract


A test in the field of education is a measuring tool that educators use to support the process of assessing students' abilities. This article examines the construct validity of a Cognitive Abilities (CA) questionnaire administered under the Computer-Based Test (CBT) model. Construct validity was checked with confirmatory factor analysis (CFA) in LISREL 9.30, using a small sample. The results confirmed an approximate-fit chi-square of 0.00 (≤ 0.03), degrees of freedom (df) of 0 (≤ 2.00), a Root Mean Square Error of Approximation (RMSEA) of 0.000 (≤ 0.08), and a p-value of 1.00000 (> 0.80). These results indicate a good fit of the data to the correlated one-factor model of single items or questionnaires (content, construction, and language style). From this point of view, construct validity emphasizes how closely the compiled questionnaire relates to the theoretical concepts it is intended to measure, as established through expert judgement.
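The RMSEA criterion reported in the abstract can be illustrated with a minimal sketch in pure Python (not the authors' code); the function name and the alternative sample values are assumptions for illustration. Note that for a saturated model (df = 0), as reported here, RMSEA is 0 by convention:

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root Mean Square Error of Approximation.

    chi2: model chi-square statistic
    df:   model degrees of freedom
    n:    sample size

    Returns 0.0 when the model is saturated (df == 0) or when
    chi-square does not exceed its degrees of freedom.
    """
    if df == 0 or chi2 <= df:
        return 0.0
    return math.sqrt((chi2 - df) / (df * (n - 1)))

# The saturated one-factor model from the abstract (chi2 = 0.00, df = 0),
# with a hypothetical small sample of n = 30:
print(rmsea(0.00, 0, 30))          # 0.0
print(rmsea(0.00, 0, 30) <= 0.08)  # True: meets the common < 0.08 cutoff
```

A usage caveat: with df = 0 the model reproduces the data perfectly by construction, so a zero RMSEA carries little evidential weight on its own; the common ≤ 0.08 cutoff is only informative when df > 0.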

Keywords


construct validity; one-factor model; CBT
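Since the abstract ties construct validity to expert judgement, and the reference list includes Aiken (1980) on the validity of single items, a minimal sketch of Aiken's V coefficient may clarify how judges' ratings of an item are aggregated. The function name and the example ratings below are hypothetical:

```python
def aikens_v(ratings, lo: int, hi: int) -> float:
    """Aiken's V content-validity coefficient for a single item.

    ratings: scores assigned by the expert judges (each between lo and hi)
    lo, hi:  lowest and highest rating categories on the scale

    V = sum(r - lo) / (n * (c - 1)), ranging from 0 (all judges chose
    the lowest category) to 1 (all judges chose the highest).
    """
    n = len(ratings)
    c = hi - lo + 1  # number of rating categories
    s = sum(r - lo for r in ratings)
    return s / (n * (c - 1))

# Five hypothetical judges rate an item on a 1-5 scale:
print(aikens_v([5, 4, 5, 4, 5], lo=1, hi=5))  # 0.9
```

Values of V close to 1 indicate strong agreement among judges that the item is relevant to the construct being measured.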

References


Aiken, L. R. (1980). Content Validity and Reliability of Single Items or Questionnaires. Educational and Psychological Measurement, 40(4), 955-959.

Beaujean, A. A., & Benson, N. F. (2019). Theoretically-consistent cognitive ability test development and score interpretation. Contemporary School Psychology, 23(2), 126-137.

Berger, J. L., & Karabenick, S. A. (2016). Construct Validity of Self-Reported Metacognitive Learning Strategies. Educational Assessment, 21(1), 19–33.

Blikstein, P., Kabayadondo, Z., Martin, A., & Fields, D. (2017). An Assessment Instrument of Technological Literacies in Makerspaces and FabLabs. Journal of Engineering Education, 106(1), 149–175. https://doi.org/10.1002/jee.20156.

Bloom, B. S. (1956). Taxonomy of educational objectives. Vol. 1: Cognitive domain. New York: McKay.

Boevé, A. J., et al. (2015). Introducing Computer-Based Testing in High-Stakes Exams in Higher Education: Results of a Field Experiment. PLOS ONE, 10(12), 1–13.

Brigman, G., et al. (2015). Psychometric Properties and Confirmatory Factor Analysis. Measurement and Evaluation in Counseling and Development, 48(1), 3-4.

Brown, T. A. (2015). Confirmatory Factor Analysis for Applied Research. New York, NY: Guilford Press.

Burcu, M., Alexander, G C., Ng, X., & Harrington, D. (2015). Construct Validity and Factor Structure of Survey-Based Assessment of Cost-Related Medication Burden. Medical Care, 53(2), 199-206. https://doi.org/10.1097/MLR.0000000000000286.

Chang, H. H. (2015). Psychometrics behind computerized adaptive testing. Psychometrika, 80(1), 1-20.

Chiu, Y. H., et al. (2016). Psychometric Properties of the Perceived Stress Scale (PSS): Measurement Invariance Between Athletes and Non-Athletes and Construct Validity. Journal of Life and Environmental Sciences (PeerJ), 1-20.

Cokley, K. (2015). A Confirmatory Factor Analysis of the Academic Motivation Scale with Black College Students. Measurement and Evaluation in Counseling and Development, 48(2), 124-139. https://doi.org/10.1177/0748175614563316.

Cronbach, L. J., & Meehl, P. E. (1955). Construct Validity in Psychological Tests. Psychological Bulletin, 52(4), 281-302. https://doi.org/10.1037/h0040957.

Duckworth, A. L., & Yeager, D. S. (2015). Measurement Matters: Assessing Personal Qualities Other than Cognitive Ability for Educational Purposes. Educational Researcher, 44(4), 237-251. https://doi.org/10.3102/0013189X15584327.

Embretson, S., & Gorin, J. (2001). Improving Construct Validity with Cognitive Psychology Principles. Journal of Educational Measurement, 38(4), 343-368. https://doi.org/10.2307/1435454.

Han, B. (2016). Social Media Burnout: Definition, Measurement Instrument, and Why We Care. Journal of Computer Information Systems, 1–9.

Hoogland, K., & Tout, D. (2018). Computer-based assessment of mathematics into the twenty-first century: pressures and tensions. ZDM, 50(4), 675-686.

Huang, H. Y. (2019). Utilizing response times in cognitive diagnostic computerized adaptive testing under the higher‐order deterministic input, noisy ‘and’ gate model. British Journal of Mathematical and Statistical Psychology, 1-33.

Huebner, A., et al. (2018). Factors Affecting the Classification Accuracy and Average Length of a Variable-Length Cognitive Diagnosis Computerized Test. Journal of Computerized Adaptive Testing (IACAT), 6(1), 1-15. https://doi.org/10.7333/2Ficat.V6.155.

Jeong, H. (2014). A Comparative Study of Scores on Computer-Based Tests and Paper-Based Tests. Behaviour & Information Technology, 33(4), 410-422.

Jöreskog, K. G., & Sörbom, D. (2006). LISREL 9.30 student version. Scientific Software International, Inc.

Kane, M. T. (2015). Explicating Validity. Assessment in Education: Principles, Policy & Practice, 1-15.

Kaplan, M., de la Torre, J., & Barrada, J. R. (2015). New Item Selection Methods for Cognitive Diagnosis Computerized Adaptive Testing. Applied Psychological Measurement, 39(3), 167-188. https://doi.org/10.1177/0146621614554650.

Kaya, Y., & Leite, W. L. (2017). Assessing Change in Latent Skills Across Time With Longitudinal Cognitive Diagnosis Modeling: An Evaluation of Model Performance. Educational and Psychological Measurement, 77(3), 369-388.

Kelava, A. (2016). A Review of Confirmatory Factor Analysis for Applied Research (Second Edition). Journal of Educational and Behavioral Statistics, 41(4), 443–447.

Khodeir, N., Elazhary, H., & Wanas, N. (2017). Rule-based Cognitive Modeling and Model Tracing for Symbolization in a Math Story Problem Tutor. International Journal of Emerging Technologies in Learning (iJET), 12(4), 111–125. https://doi.org/10.3991/ijet.v12i04.659.

Khoshsima, H., & Hashemi Toroujeni, S. M. (2017). Comparability of Computer-Based Testing and Paper-Based Testing: Testing Mode Effect, Testing mode order, computer attitudes, and testing mode preference. International Journal of Computer (IJC), 24(1), 80-99.

Krumm, G., et al. (2016). Construct Validity and Factorial Invariance Across Sex of the Torrance Test of Creative Thinking–Figural Form A in Spanish-speaking Children. Thinking Skills and Creativity, 22, 180-189.

Kubiszyn, T., & Borich, G. (2013). Educational Testing and Measurement: Classroom Application and Practice, Tenth Edition. John Wiley & Sons, Inc.

Li, C. H. (2016). Confirmatory Factor Analysis with Ordinal Data: Comparing Robust Maximum Likelihood and Diagonally Weighted Least Squares. Behavior Research Methods, 48(3), 936-949. https://doi.org/10.3758/s13428-015-0619-7.

Lin, C. J., & Chang, H. H. (2019). Item Selection Criteria With Practical Constraints in Cognitive Diagnostic Computerized Adaptive Testing. Educational and psychological measurement, 79(2), 335-357.

Lin, et al. (2018). A Discrete Multiobjective Particle Swarm Optimizer for Automated Assembly of Parallel Cognitive Diagnosis Tests. IEEE Transactions on Cybernetics.

Ma, W., & de la Torre, J. (2016). A Sequential Cognitive Diagnosis Model for Polytomous Responses. British Journal of Mathematical and Statistical Psychology, 69, 253-275. https://doi.org/10.1111/bmsp.12070.

Maddocks, D. L. S. (2018). The Identification of Students Who Are Gifted and Have a Learning Disability : A Comparison of Different Diagnostic Criteria. Gifted Child Quarterly, 62(2), 175–192. https://doi.org/10.1177/0016986217752096.

Maier, U., Wolf, N., & Randler, C. (2016). Effects of a Computer-Assisted Formative Assessment Intervention Based on Multiple-tier Diagnostic Items and Different Feedback Types. Computers and Education, 95, 85-98.

Marques-Costa, C., Almiro, P. A., & Simões, M. R. (2018). Computerized cognitive tests (CCT) in elderly: A psychometric review. Revue Europeenne de Psychologie Appliquee, 68(2), 61-68.

Marsh, H, W., et al. (2014). Exploratory Structural Equation Modeling: An Integration of the Best Features of Exploratory and Confirmatory Factor Analysis. Annual Review of Clinical Psychology.

McCormick, J., & Barnett, K. (2011). Teachers' Attributions for Stress and their Relationships with Burnout. International journal of educational management, 25(3).

Messick, S. (1995). Validation of Inferences from Persons' Responses and Performances as Scientific Inquiry Into Score Meaning. The Validity of Psychological Assessment. 50(9), 741–749.

Moore, A. L., & Miller, T. M. (2018). Reliability and validity of the revised Gibson Test of Cognitive Skills, a computer-based test battery for assessing cognition across the lifespan. Psychology research and behavior management, 11, 25-35.

Nyoman, S, I., Ketut E, P, I., & Purnomo, M. H. (2015). Intelligent Classification of Learner’s Cognitive Domain Using Bayes Net, Naïve Bayes, and Utilizing Bloom’s Taxonomy-Based Serious Game. International Journal of Emerging Technologies in Learning, 10(2), 46–52. https://doi.org/10.3991/ijet.v10i1.4451.

Orr, M. T., et al. (2018). The Performance Assessment for Leaders: Construct Validity and Reliability Evidence. Journal of Research on Leadership Education, 13(2), 139-161.

Ouyang, X., Xin, T., & Chen, F. (2016). Construct Validity of the Children’s Coping Strategies Scale (CCSS): A Bifactor Model Approach. Psychological Reports, 118(1), 199–218.

O'Neill, T. A., et al. (2017). Forced-Choice Pre-Employment Personality Assessment: Construct Validity and Resistance to Faking. Personality and Individual Differences, 115, 120-127. https://doi.org/10.1016/j.paid.2016.03.075.

Perry, J. L., Nicholls, A. R., Clough, P. J., & Crust, L. (2015). Assessing Model Fit: Caveats and Recommendations for Confirmatory Factor Analysis and Exploratory Structural Equation Modeling. Measurement in Physical Education and Exercise Science, 19(1), 12-21. https://doi.org/10.1080/1091367X.2014.952370.

Peters, M. A. (2017). Technological Unemployment: Educating for the Fourth Industrial Revolution. Educational Philosophy and Theory, 49(1), 1-6. https://doi.org/10.1080/00131857.2016.1177412.

Ramirez, T. V. (2016). On Pedagogy of Personality Assessment: Application of Bloom’s Taxonomy of Educational Objectives. Journal of Personality Assessment, 1-8.

Richardson, J., Iezzi, A., Khan, M. A., Chen, G., & Maxwell, A. (2016). Measuring the sensitivity and construct validity of 6 utility instruments in 7 disease areas. Medical Decision Making, 36(2), 147-159. https://doi.org/10.1177/0272989X15613522.

Rochefort, C., Baldwin, A. S., & Chmielewski, M. (2018). Experiential Avoidance: An Examination of the Construct Validity of the AAQ-II and MEAQ. Behavior Therapy, 49(3), 435-449.

Roediger III, H. L., Putnam, A. L., & Smith, M. A. (2011). Ten Benefits of Testing and Their Applications to Educational Practice. In Psychology of learning and motivation 55, 1-36.

Sellbom, M., Cooke, D. J., & Hart, S. D. (2015). Construct validity of the Comprehensive Assessment of Psychopathic Personality (CAPP) concept map. International Journal of Forensic Mental Health, 14(3), 172-180.

Shuck, B., Adelson, J. L., & Reio Jr, T. G. (2016). The Employee Engagement Scale: Initial Evidence for Construct Validity and Implications for Theory and Practice, Human Resource Management, 1–25. https://doi.org/10.1002/hrm.

Tamboer, P., & Vorst, H. C. (2015). A New Self-Report Inventory of Dyslexia for Students: Criterion and Construct Validity. Dyslexia, 21(1), 1-34. https://doi.org/10.1002/dys.1492.

Taylor, S., et al. (2018). Construct validity and responsiveness of the functional Tactile Object Recognition Test for children with cerebral palsy. Australian Occupational Therapy Journal, 65(5), 420-430. https://doi.org/10.1111/1440-1630.12508.

Verdugo, M. A., et al. (2016). Confirmatory factor analysis of the supports intensity scale for children. Research in developmental disabilities, 49, 140-152.

Wong, I. H., Denkers, M., Urquhart, N., & Farrokhyar, F. (2015). Construct validity testing of the arthroscopic knot trainer (ArK). Knee Surgery Sports Traumatology: Arthroscopy, 23(3), 906-911. https://doi.org/10.1007/s00167-013-2524-x.

Xu, G. (2017). Identifiability of Restricted Latent Class Models with Binary Responses. The Annals of Statistics, 45, 675-707. https://doi.org/10.1214/16-AOS1464.

Zhan, P., Wang, W. C., & Li, X. (2019). A partial mastery, the higher-order latent structural model for polytomous attributes in cognitive diagnostic assessments. Journal of Classification, 1-24.




DOI: http://dx.doi.org/10.23887/jere.v5i3.33285



Journal of Education Research and Evaluation (JERE) is published by:

LEMBAGA PENELITIAN DAN PENGABDIAN KEPADA MASYARAKAT (LPPM)

UNIVERSITAS PENDIDIKAN GANESHA



Journal of Education Research and Evaluation (JERE) is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.