The Test of Construct Validity for One-Factor Model
DOI: https://doi.org/10.23887/jere.v5i3.33285
Keywords: construct validity, one-factor model, CBT
Abstract
A test in the field of education is a measuring tool used by educators to support the process of assessing students' abilities; however, every item in an instrument needs to be tested. This study aims to test the construct validity of a Cognitive Abilities (CA) questionnaire instrument administered with a Computer-Based Test (CBT) model. The research sample consisted of 31 students, and the data were collected using a questionnaire. The questionnaire instrument was assessed by five experts who were invited as validators through a Focus Group Discussion (FGD). The construct validity of the questionnaire was checked by confirmatory factor analysis (CFA) using LISREL 9.30 with a small sample. The results confirmed a chi-square (approximate fit) value of 0.00 ≤ 0.03, degrees of freedom (df) of 0 ≤ 2.00, a Root Mean Square Error of Approximation (RMSEA) of 0.000 ≤ 0.08, and a p-value of 1.000 > 0.80. Based on these results, the data show a good fit to the one-factor model, in which each aspect of the questionnaire (content, construction, and language style) loads on a single factor. This supports the view that construct validity emphasizes the extent to which the compiled questionnaire corresponds to the theoretical concepts it is intended to measure, as established through expert judgment.
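For context, a one-factor model with exactly three indicators is just-identified: its six free parameters (three loadings with the factor variance fixed, plus three error variances) exhaust the six unique elements of the covariance matrix, so a chi-square of 0.00 with df = 0, RMSEA = 0.000, and p = 1.000 follow by construction, consistent with the values reported above. The sketch below shows how a comparable one-factor CFA could be specified in Python with the semopy package; it is an illustration only, not the authors' LISREL 9.30 analysis, and the data file and column names (content, construction, language) are hypothetical placeholders.

    # Minimal one-factor CFA sketch (assumes the semopy package;
    # file and column names are hypothetical, not the study's data).
    import pandas as pd
    import semopy

    # Ratings for the three questionnaire aspects, one column per aspect.
    data = pd.read_csv("responses.csv")  # columns: content, construction, language

    # One-factor model: all three indicators load on a single latent factor CA.
    model = semopy.Model("CA =~ content + construction + language")
    model.fit(data)

    # Fit indices comparable to those in the abstract: chi-square, df,
    # p-value, and RMSEA (degenerate here, since df = 0).
    print(semopy.calc_stats(model).T)

Because such a model is saturated, the global fit indices are perfect by definition; the evidential weight therefore rests on the size of the estimated loadings and on the expert judgment of the questionnaire's content.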