Strides in Development of Medical Education

Document Type: Original Article

Authors

1 Oral and Dental Diseases Research Center, Department of Pediatric Dentistry, School of Dentistry, Kerman University of Medical Sciences, Kerman, Iran

2 Department of Pediatric Dentistry, School of Dentistry, Kerman University of Medical Sciences, Kerman, Iran

3 Endodontology Research Center, Department of Restorative Dentistry, School of Dentistry, Kerman University of Medical Sciences, Kerman, Iran

4 Dentist, Private Practice, Kerman, Iran

Abstract

Background: Assessment plays a key role in encouraging learning and in evaluating whether learning objectives have been achieved.
Objectives: This study aimed to assess the knowledge and attitudes of dental faculty members toward Patient Management Problem (PMP) and Multiple-Choice Question (MCQ) tests.
Methods: In this descriptive-analytic cross-sectional study, a questionnaire was used to collect information from 54 faculty members of the dental school of Kerman University of Medical Sciences from 2019 to 2020. The questionnaire consisted of two parts: the first part covered demographic information, and the second contained questions on the faculty members' knowledge (10 questions) and attitudes (8 questions) towards MCQ and PMP tests. Cronbach’s alpha was calculated as 0.8, and content validity was assessed to establish the validity of the questionnaire. Data were analyzed in SPSS 20 using descriptive statistics (percentage, mean, and standard deviation), the non-parametric Kruskal-Wallis test, and linear regression.
Results: Fifty-four dental faculty members returned completed questionnaires; 34 (63%) were female, and 20 (37%) were male. The mean scores of the knowledge and attitude questionnaires toward the MCQ and PMP tests were 7.20 and 27.83, respectively. The mean knowledge and attitude scores had no significant relationship with age, gender, or teaching experience.
Conclusion: Based on the results, it can be concluded that the dental faculty members had good knowledge of the structure of MCQ and PMP tests and of their strengths and weaknesses. They were also well aware of the shortcomings of the MCQ test in assessing clinical reasoning and of its lack of transparency as an assessment tool.

Keywords

Background

Assessment is essential in encouraging learning and in evaluating whether learning goals have been reached. A multiple-choice question (MCQ) is composed of two parts: a stem that identifies the question or problem and a set of alternatives or possible answers, containing a key (the best answer to the question) and several distractors (plausible but incorrect answers). Students respond to MCQs by indicating the alternative they believe best answers or completes the stem. The patient management problem (PMP) is an instrument increasingly used to assess medical competence. The PMP attempts to place the student, figuratively, into a setting recognizable as belonging to real life and, within that setting, presents a clinical problem for solution or management. In contrast to the MCQ, in which choices are simply scored as correct or incorrect, the PMP records such scores and also gives the test-taker the results of the selected actions. Classic student assessments are generally knowledge-based. MCQs can assess medical students' knowledge, whereas assessing clinical skills requires more powerful methods (1-3). The PMP is a written test that begins with a description of the patient's problem; students must gather the necessary information from the clinical files and finally decide on the appropriate diagnosis and treatment plan (3). The PMP test exposes students to realistic situations and forces them to find a solution to a clinical problem (2). A study by Ben Abdelaziz et al. in 2018 showed that more than 90% of PMP-evaluated students considered it a valuable educational tool and agreed with its routine use in education in Tunisia (4).

Zamani et al. (2017) compared the MCQ and the key-feature examination in measuring students' clinical reasoning. Their results showed that MCQ tests could not measure the strength of students' clinical reasoning (5). Mahmoodi et al. (2014) compared students' ability to answer the PMP and Modified Essay Question (MEQ) tests with the MCQ test and found that the low scores on the PMP and MEQ tests indicated students' inability to answer these types of tests compared with the MCQ (6). Zafar et al. (2011) showed that a well-constructed MCQ is superior to the MEQ in testing the higher cognitive skills of medical students in a problem-based learning setup (7).

Besides technical expertise, the success of dental care depends on the dentist's and the patient's behavioral patterns and the way they interact with each other. A dentist’s positive attitude and communication skills significantly affect a patient’s well-being. In the past decade, measuring the clinical performance of dental students has received much attention. In 2011, Monajemi et al. showed that clinical reasoning had not received enough attention in Iranian dental education and that the tests conducted in the country mainly evaluate memorized knowledge (1). In 2005, Fakhri et al. evaluated the quality of the assessment of dental students by the faculty at Ahvaz Jundishapur University of Medical Sciences and showed that 94.9% of faculty members used multiple-choice questions in their exams (2).

Most studies have examined students' knowledge of and attitudes toward various scientific and clinical evaluation methods (7-9). However, the knowledge and attitudes of the faculty members who administer these tests are far more important.

This importance stems from the fact that faculty members should be able to choose the type of test according to their expectations of the students’ learning levels. If the designer does not know the proper features of the questions, low-quality questions result, and the students' knowledge cannot be adequately examined. Sound question design is especially important in medical education because students perform clinical procedures, and if their knowledge is assessed inadequately, misdiagnosis and malpractice become possible. Therefore, familiarity with proper question-design methods is very important for faculty members.

Objectives

This study aimed to assess the knowledge and attitudes of Kerman University dental faculty members toward PMP and MCQ exams in the assessment of dental students.

Methods

This descriptive-analytic cross-sectional study was conducted by distributing a questionnaire among faculty members of the Kerman Dental School of Kerman University of Medical Sciences in 2019 - 2020. The study population consisted of the faculty members of Kerman Dental School. The inclusion criterion was working in the School of Dentistry in 2019 - 2020, and the exclusion criterion was less than one year of teaching experience. Of all 65 faculty members, 54 were included in the study after applying the inclusion and exclusion criteria. Questionnaires were distributed in every department, every question was answered, and the questionnaires were then collected. This study was approved by the Research Ethics Committee of Kerman University of Medical Sciences (ethics code: IR.KMU.REC.1399.136).

The researcher-made questionnaire for this study consisted of two parts: the first part included demographic information (age, gender, employment status, and the duration of teaching in the dental school), and the second part consisted of questions related to the knowledge (10 questions) and attitudes (8 questions) of the faculty members towards MCQ and PMP tests.

To evaluate the content validity of the questionnaire, it was sent to 10 medical education specialists, and their opinions were incorporated into the questionnaire. The content validity index (CVI) was above 75% for all questions except three, which is desirable; questions with a CVI below 75% were revised according to the experts’ recommendations. To evaluate reliability, the questionnaire was completed by 20 faculty members of the School of Dentistry, and Cronbach's alpha was calculated as 0.8 for the whole questionnaire.

The answers to the knowledge questions were scored as true (1), false (0), and I don’t know (0), so the total score ranged from 0 to 10. The answers to the attitude questions were rated on a five-point Likert scale: strongly agree (5), agree (4), no opinion (3), disagree (2), and strongly disagree (1); therefore, the total score ranged from 8 to 40. A higher score on either subscale indicated better knowledge of, or a more favorable attitude toward, MCQ and PMP tests. Data were analyzed in SPSS 20 using descriptive statistics (percentage, mean, and standard deviation), the non-parametric Kruskal-Wallis test, and linear regression. The significance level was set at P < 0.05.

Results

In this study, 54 faculty members of the Kerman School of Dentistry participated; among them, 63% were female. The mean age of the study subjects was 39.87 ± 9.3 years. The Kolmogorov-Smirnov test showed that the knowledge and attitude scores were normally distributed (P > 0.05); therefore, linear regression was used. Descriptive statistics of the demographic data are presented in Table 1. The results of this study showed that females, those over 40 years old, those with more than ten years of teaching experience, and recruited faculty members had higher knowledge scores than other individuals, but these differences were not significant. The results also showed that those with 5 to 10 years of teaching experience and those in the committed-to-service period had higher attitude scores, but these differences were not significant either.

 

Table 1. Descriptive statistics of demographic variables

Variables | Categories | N (%)
Age | Below 40 years | 35 (64.8%)
 | Above 40 years | 19 (35.2%)
Graduation from general dentistry course | 1980-1990 | 5 (9.3%)
 | 1990-2000 | 9 (16.7%)
 | 2000-2010 | 21 (38.8%)
 | 2010-2020 | 19 (35.2%)
Graduation from specialty dentistry course | 1980-1990 | 0 (0%)
 | 1990-2000 | 9 (16.7%)
 | 2000-2010 | 9 (16.7%)
 | 2010-2020 | 36 (66.6%)
Employment status | Committed to service | 14 (25.9%)
 | Recruitment | 40 (74.1%)
Teaching experience | Less than five years | 24 (44.4%)
 | 5 to 10 years | 13 (24.2%)
 | More than ten years | 17 (31.4%)

 

The frequencies of the responses to the knowledge and attitude questions toward PMP and MCQ tests are presented in Table 2 and Table 3. According to Table 2, the most frequent correct answer was to question 1, about the framework of the questions, and the least frequent correct answer was to question 8, about the validity and reliability of the questions. According to Table 3, the strongest agreement was that scoring is faster with the MCQ, while the strongest disagreement was with question 3, which stated that the MCQ assesses thinking skills.

The mean ± standard deviation of the knowledge score was 7.20 ± 1.8, and that of the attitude score was 27.83 ± 3.2. The mean knowledge and attitude scores were not significantly related to age, gender, or teaching experience (P > 0.05) (Table 4).

Discussion

The study results showed that the faculty members of Kerman Dental School had an acceptable level of knowledge about the correct structure and appropriate use of MCQ and PMP tests and about their other strengths and weaknesses. They had a positive attitude towards combining other assessment tools with MCQ tests for assessing students and toward the PMP test’s ability to assess students' clinical reasoning skills. Attitude and knowledge scores did not differ significantly amongst age groups, and the time elapsed since graduation was relatively long in all groups. This suggests that the educational staff of this school understand the correct and appropriate structural frameworks of MCQ and PMP exams well.

In 2005, Fakhri et al. demonstrated that 94.9% of faculty members used the MCQ test in exams (2). In the present study, most faculty members of Kerman Dental School believed that the MCQ test was the best form of objective test in terms of ease of evaluating the answers and that it is a quick and easy-to-score assessment method.

 

Table 2. Frequency of the responses to the knowledge questions

Questions | True N (%) | False N (%) | I don’t know N (%)
Question 1: The multiple-choice test includes several questions, each consisting of a stem and several options, and the student chooses the correct option from those proposed. | 47 (87) | 3 (5.6) | 4 (7.4)
Question 2: In the multiple-choice test, each question must measure an important topic or an educational goal. | 44 (81.5) | 6 (11.1) | 4 (7.4)
Question 3: In the multiple-choice test, a question’s options must be homogeneous and related to a single topic. | 38 (70.4) | 13 (24.1) | 3 (5.6)
Question 4: In the multiple-choice test, between 3 and 5 options should be designed for each question. | 46 (85.2) | 5 (9.3) | 3 (5.6)
Question 5: In the multiple-choice test, no more than one problem should be included in each question. | 33 (61.1) | 19 (35.2) | 2 (3.7)
Question 6: In the multiple-choice test, the test subject is measured only in the domain of knowledge. | 41 (75.9) | 8 (14.8) | 5 (9.3)
Question 7: In the patient problem management test, a clinical scenario is presented, and the subject is then asked questions about history-taking, examination, diagnosis, and treatment measures. | 46 (85.2) | 2 (3.7) | 6 (11.1)
Question 8: The reliability and validity of the patient problem management test are low. | 15 (27.8) | 18 (33.3) | 21 (38.9)
Question 9: In the patient problem management test, the test subject is measured in the areas of knowledge, attitude, and skills. | 41 (75.9) | 4 (7.4) | 9 (16.7)
Question 10: The patient problem management test is time-consuming. | 38 (70.4) | 10 (18.5) | 6 (11.1)

 

Table 3. Frequency of answers to the attitude questions

Questions | Strongly Disagree N (%) | Disagree N (%) | No Opinion N (%) | Agree N (%) | Strongly Agree N (%)
Question 1: In my opinion, the multiple-choice test has uniform questions and is the best type of objective test in terms of ease of correcting the answers. | 5 (9.3%) | 3 (5.6%) | 4 (7.4%) | 29 (53.7%) | 13 (24.1%)
Question 2: In my opinion, multiple-choice testing is a quick method, and it is easy to score. | 1 (1.9%) | 2 (3.7%) | 1 (1.9%) | 33 (61.1%) | 17 (31.5%)
Question 3: In my opinion, the multiple-choice test assesses thinking skills. | 9 (16.7%) | 32 (59.3%) | 3 (5.6%) | 9 (16.7%) | 1 (1.9%)
Question 4: In my opinion, the multiple-choice test covers evaluating different educational areas in one test. | 1 (1.9%) | 15 (27.8%) | 9 (16.7%) | 28 (51.9%) | 1 (1.9%)
Question 5: In my opinion, the multiple-choice test evaluates only the knowledge of the test subject. | 0 (0.0%) | 10 (18.5%) | 2 (3.7%) | 34 (63%) | 8 (14.8%)
Question 6: In my opinion, multiple-choice testing is not a transparent review tool and should be combined with other evaluation tools. | 0 (0%) | 1 (1.9%) | 4 (7.4%) | 38 (70.4%) | 11 (20.4%)
Question 7: In my opinion, the patient problem management test assesses limited areas of learner knowledge about diseases. | 8 (14.8%) | 30 (55.6%) | 3 (5.6%) | 11 (20.4%) | 2 (3.7%)
Question 8: In my opinion, the patient problem management test is a method of assessing students' clinical reasoning power. | 0 (0%) | 3 (5.6%) | 4 (7.4%) | 31 (57.4%) | 16 (29.6%)

 

In the present study, most faculty members believed that the MCQ test was not a transparent assessment instrument and should be combined with other assessment tools. In 1998, Hammond et al. observed that the scores obtained in an MCQ exam were higher than the students' actual knowledge of the material, with some of these scores obtained by conjecture and chance; they concluded that the MCQ test is not a transparent tool for evaluating students (10). Also, in 2022, Darmiani and Ebrahimipour demonstrated that students’ scores on the PMP test were lower than their scores on the MCQ test, implying their unfamiliarity with PMP tests and poor performance in responding to them (11). Students’ viewpoints and attitudes toward the MCQ demonstrated negative impressions and suggested the superiority of other learning methods (12). As an evaluation method, the MCQ might be a valuable way to enhance medical students’ learning, despite doubts about its real efficiency and its pitfalls in terms of time and effort.

A study by Zamani et al. conducted in 2017 found that the MCQ, the most common method of assessing students today, could not adequately measure students' clinical reasoning power (5). In the current study, most faculty members likewise believed MCQ tests could not evaluate thinking skills.

The findings of Esmaeili (2015) (13), Mahmoudi (2014) (6), Palmer (2007) (14), and Zafar (2011) (7) indicated that the lack of correlation between PMP and MCQ test scores and academic achievement reflects students' weakness in reasoning and clinical judgment despite their high GPAs and scientific knowledge. Moreover, Zamani in 2017 (5) and Ben Abdelaziz in 2018 (4) concluded that the PMP test is an excellent way to evaluate students’ clinical reasoning skills. In the present study, most faculty members believed that the PMP was an effective method for assessing students' clinical reasoning power.

In 1985, Norcini et al. showed that the reliability and validity of the PMP test were lower than those of the MCQ (15). In the present study, the majority of the faculty members were ill-informed about the reliability and validity of the PMP test, and only 27.8% considered them low.

 

Table 4. Linear regression of the effect of demographic factors on knowledge of and attitudes toward PMP and MCQ tests

Variables | Categories | Knowledge score: Coef | SE | P-value | Attitude score: Coef | SE | P-value
Constant | - | 5.22 | 0.71 | <0.001 | 0.64 | 0.52 | 0.22
Age | Below 40 years | Reference | | | Reference | |
 | Above 40 years | 1.04 | 1.37 | 0.45 | 0.87 | 0.76 | 0.26
Gender | Male | Reference | | | Reference | |
 | Female | 0.02 | 0.37 | 0.97 | -0.20 | 0.41 | 0.62
Employment | Committed to service | Reference | | | Reference | |
 | Recruitment | 0.51 | 0.45 | 0.26 | 0.03 | 0.50 | 0.94
Teaching experience | Less than five years | Reference | | | Reference | |
 | 5 to 10 years | 0.47 | 0.71 | 0.51 | -0.25 | 0.79 | 0.75
 | More than ten years | 0.11 | 0.61 | 0.85 | 0.04 | 0.68 | 0.95

The dependent variables are the total knowledge score and the total attitude score.

 

In designing MCQ tests, the examiner considers whether a topic matters for students’ competence and decides whether to include it in a question. Therefore, each objective question should either measure an educational goal or cover an important part of the curriculum. Nitko says: "First, select an educational goal or an important topic, and decide how many questions you wish to allocate to it; then take the necessary steps to design each question" (16). In the present study, 81.5% of the faculty members agreed that in multiple-choice questions, each question should measure an important topic or an educational goal.

In multiple-choice questions, all options in a question should be homogeneous in content and should not relate to different subjects (17). If this is not feasible, the question should be split so that each option becomes a right-or-wrong question independent of the other options. In the present study, 70.4% of the faculty members were aware that all the options in an MCQ should be homogeneous and related to one topic.

In MCQ tests, the choice of 3 to 5 options makes no technical difference. Designing more options makes it more difficult for students to guess the correct answer; however, as it is challenging to design many plausible distractors, a smaller number of options (3 to 4) is sometimes favored. In particular, having fewer options decreases the time needed to read them (18). In the present study, 85.2% of the faculty members were fully aware of this issue.

Each question should be related to only one subject or one educational goal; including more than one subject or goal complicates the question (19). In the present study, 61.1% of faculty members agreed with this notion.

According to Benjamin Bloom's well-known taxonomy, learning includes cognitive, affective (attitude), and psychomotor (skill) domains (20). Patient problem-solving tests can assess clinical competency in these areas (21). In 2017, a study by Kalhori et al. showed that, considering the average validity of the whole test, the mean difficulty coefficient, and taxonomy indexes I, II, and III, the tests designed by the allied health sciences faculty were within an acceptable standard range (22). In the present study, most faculty members were aware of this feature of the PMP test.

However, MCQs frequently have imperfections called item-writing flaws (IWFs). IWFs may arise from academic teachers' inadequate training and knowledge of MCQ writing (23), lack of engagement (24), and time constraints due to other academic obligations (25). In 2008, Vyas and Supe reported that flaws in MCQ writing were primarily due to insufficient faculty member training (18). In 2020, a study by Gupta et al. showed that a single short-duration training session was insufficient to prevent MCQ-writing flaws; therefore, faculty training on MCQ writing needs sustained attention, and implementing longer-duration courses supplemented by repeated or continuous faculty development programs is pivotal (26). On the other hand, in 2020, Sezari et al. showed that a one-day workshop on MCQs improved the faculty members’ capacities and was practical for the faculty, though short-term repetitive workshops could yield better results (27).

Because the patient problem management test has low validity, achieving the desired validity requires a long testing time; ideally, 90 minutes should be allowed for responding (21). In the present study, most faculty members were aware of the time-consuming nature of this type of test.

The results of the present study showed that the faculty members of Kerman Dental School have a good level of knowledge about the correct and appropriate structure of MCQ and PMP tests, as well as their strengths and weaknesses. Why faculty members do not apply the correct principles of question design despite being aware of them remains unclear. Proper design of questions within a standard framework can lead to an accurate assessment of students and may result in more accurate clinical decisions. Despite the faculty members' appropriate level of knowledge, retraining courses to refresh their question-design skills seem necessary. After all, students who pass a poorly designed exam without possessing adequate knowledge of the topic can pose a real threat to their future patients.

The main limitation of this study was that not all dental school faculty members cooperated, owing to the COVID-19 outbreak.

Conclusion

The results of the present study showed that the faculty members of Kerman Dental School had good knowledge of the structure of MCQ and PMP tests and their strengths and weaknesses. Faculty members also believed that, in contrast to the PMP test, the MCQ test is limited in assessing dental students' clinical reasoning and lacks transparency as an assessment tool. These results should be used to analyze this type of evaluation test, and faculty members should be given feedback to improve their question-design skills.

  1. Monajemi A, Adibi P, Soltani Arabshahi K, Arbabi F, Akbari R, Custers E, et al. The battery for assessment of clinical reasoning in the Olympiad for medical sciences students. Iranian Journal of Medical Education. 2011;10(5):1056-67. doi:10.4103/2277-9531.94420. [PMCID: PMC3577397]. [PMID: 23555113]
  2. Fakhri A, Komeili Sani H, Shakurnia A. Evaluation of students' evaluation methods by professors in clinical settings of Ahvaz Jundishapur University of Medical Sciences. Proceedings of the 7th National Conference on Medical Education; 2005 Dec 1-3; Tabriz, Iran. [In Persian]
  3. van der Vleuten C, Newble DI. How can we test clinical reasoning? The Lancet. 1995;345(8956):1032-4. doi:10.1016/s0140-6736(95)90763-7.
  4. Ben Abdelaziz R, Hajji H, Boudabous H, Ben Chehida A, Mrad-Mazigh S, Azzouz H, et al. Patient-management Problem (PMP) for paediatrics learning: Value and students perceptions. Tunis Med. 2018;96(1):1-5. [PMID: 30324984]
  5. Zamani S, Amini M, Masoumi SZ, Delavari S, Namaki MJ, Kojuri J. The comparison of the key feature of clinical reasoning and multiple-choice examinations in clinical decision makings ability. Biomed Res. 2017;28(3):1115-9.
  6. Mahmoodi M, Dehghani M. Comparison the Students Ability in Answering to Patient Management Problem and Modified Essay Question Examination with Multiple Choice Question Examination and its Association with Educational Promotion. Stride Dev Med Educ. 2014; 11(2): 187-95. [In Persian]
  7. Zafar Khan M, Aljarallah BM. Evaluation of Modified Essay Questions (MEQ) and Multiple Choice Questions (MCQ) as a tool for Assessing the Cognitive Skills of Undergraduate Medical Students. Int J Health Sci (Qassim). 2011;5(1):39-43. [PMCID: PMC3312767]. [PMID: 22489228]
  8. Azer S. Assessment in a problem‐based learning course: Twelve tips for constructing multiple choice questions that test students' cognitive skills. Biochemistry and Molecular Biology Education. 2003;31(6): 428-34. doi:10.1002/bmb.2003.494031060288.
  9. Bazrafkan L, Shokrpour N, Torabi K. Comparison of the Assessment of Dental Students’ Laboratory Performance through MCQ and DOPS Methods, J Med Educ. 2009; 13(1&2):e105382. doi:10.22037/jme.v13i1,2.1106.
  10. Hammond EJ, McIndoe AK, Sansome AJ, Spargo PM. Multiple-choice examinations: adopting an evidence-based approach to exam technique. Anaesthesia. 1998 Nov;53(11):1105-8. doi:10.1046/j.1365-2044.1998.00583.x. [PMID: 10023280]
  11. Darmiani S, Ebrahimipour S. Comparison of Two Methods of Dental Students Assessment (MCQ and PMP) and their correlation with the total grade-point average. Journal of Dentomaxillofacial Radiology, Pathology and Surgery. 2022; 11(1):14-8.
  12. Wynter L, Burgess A, Kalman E, Heron JE, Bleasel J. Medical students: what educational resources are they using? BMC Med Educ. 2019 Jan 25;19(1):36. doi: 10.1186/s12909-019-1462-9. [PMCID: PMC6347772]. [PMID: 30683084]
  13. Esmaeili S, Sajadi FS, Mahmoodi M, Zangiabadi P. Comparison of Dental Students' Ability to Answer PMP-MEQ and MCQ Tests and Its Association with Educational Progress. Sch J Dent Sci. 2015;2(5):330-5.
  14. Palmer EJ, Devitt PG. Assessment of higher order cognitive skills in undergraduate education: modified essay or multiple choice questions? Research paper. BMC Med Educ. 2007 Nov 28;7:49. doi:10.1186/1472-6920-7-49. [PMID: 18045500]. [PMCID: PMC2148038]
  15. Norcini JJ, Swanson DB, Grosso LJ, Webster GD. Reliability, validity and efficiency of multiple choice question and patient management problem item formats in assessment of clinical competence. Med Educ. 1985;19(3):238-47. doi:10.1111/j.1365-2923.1985.tb01314.x. [PMID: 4010571]
  16. Nitko AJ. Educational Tests and Measurement: An Introduction. New York: Harcourt Brace Jovanovich; 1983: 305-7. doi:10.1177/026553228400100113.
  17. Brame CJ. Writing good multiple choice test questions. 2013 [cited 2013 Jun 2]. Available from: https://cft.vanderbilt.edu/guides-sub-pages/writing-good-multiple-choice-test-questions/.
  18. Vyas R, Supe A. Multiple choice questions: a literature review on the optimal number of options. Natl Med J India. 2008;21(3):130-3. [PMID: 19004145]
  19. Jerard K. Writing multiple-choice test items. Practical Assessment, Research, and Evaluation. 1994;4(1):3. doi:10.7275/s3cc-7y76.
  20. Krathwohl DR. A Revision of Bloom's Taxonomy: An Overview. Theory Into Practice. 2002;41(4):212-8. doi:10.1207/s15430421tip4104_2.
  21. Jesmi A, Sanagoo A, Jouybari L. Can We Use The Key Feature Tests For The Assessment Of Clinical Reasoning Of Nursing Students? J Educ Ethics Nurs. 2017; 6(1 and 2):10-4.
  22. Pourmirza Kalhori R, Abbasi M. Are Faculty Members of Paramedics Able to Designed Accurate Multiple Choice Questions? Global Journal of Health Science. 2017;9(1):211-6. doi:10.5539/gjhs.v9n1p211.
  23. Tarrant M, Ware J. Impact of item-writing flaws in multiple-choice questions on student achievement in high-stakes nursing assessments. Med Educ. 2008;42(2):198-206. doi:10.1111/j.1365-2923.2007.02957.x. [PMID:18230093]
  24. Malau-Aduli BS, Zimitat C. Peer review improves the quality of MCQ examinations. Assessment and Evaluation in Higher Education. 2012;37(8):919-31. doi:10.1080/02602938.2011.586991.
  25. Nedeau-Cayo R, Laughlin D, Rus L, Hall J. Assessment of item-writing flaws in multiple-choice questions. J Nurses Prof Dev. 2013;29(2):52-7. doi:10.1097/NND.0b013e318286c2f1. [PMID: 23657034]
  26. Gupta P, Meena P, Khan AM, Malhotra RK, Singh T. Effect of Faculty Training on Quality of Multiple-Choice Questions. Int J Appl Basic Med Res. 2020;10(3):210-4. doi: 10.4103/ijabmr.IJABMR_30_20. [PMCID: PMC7534721]. [PMID: 33088746]
  27. Sezari P, Tajbakhsh A, Massoudi N, Arhami Dolatabadi A, Tabashi S, Sayyadi S, et al. Evaluation of One-Day Multiple-Choice Question Workshop for Anesthesiology Faculty Members. Anesth Pain Med. 2020 Dec 13;10(6):e111607. doi:10.5812/aapm.111607. [PMCID: PMC8207881]. [PMID: 34150580]