Saudi Journal for Health Sciences

ORIGINAL ARTICLE
Year: 2020  |  Volume: 9  |  Issue: 2  |  Page: 77-83

Student-generated formative assessment and its impact on final assessment in a problem-based learning curriculum


Mazhar Mushtaq, Muhammad Abdul Mateen, Khawaja Husnain Haider 
 Department of Basic Sciences, College of Medicine, Sulaiman Al-Rajhi University, Al Bukayriyah, Al-Qassim, Saudi Arabia

Correspondence Address:
Mazhar Mushtaq
Sulaiman Al Rajhi University, Al Bukairiyah, Al-Qassim
Saudi Arabia

Abstract

Objective: Besides modified essay questions (MEQs), well-structured multiple-choice questions (MCQs) are among the most well-established tools for both summative and formative assessment owing to their applicability and objectivity. Nevertheless, constructing high-quality MCQs is a time-intensive exercise for the medical faculty. As our medical curriculum is based on a problem-based learning (PBL) approach, in which students are central to the process of teaching and learning, we aimed to study the beneficial effects of students' involvement in MCQ writing on their learning process. We hypothesized that involving students in MCQ writing would be a motivational exercise for their improved learning. Methods: A retrospective study was designed in the medical school to analyze end-of-block examination feedback data from students (n = 287), spanning three academic sessions of Block 1.4, entitled "Thinking and Doing". The end-of-block feedback was compared across sessions conducted with and without student involvement in MCQ writing, which formed part of their formative assessment during the block. Results: Analysis of the students' feedback collected by the Planning, Development, and Quality Assurance survey using Qualtrics at the end of the course, supported by a Google survey conducted by the block coordinator, showed that student performance improved significantly when students were involved in an incentive-driven MCQ-writing exercise for formative assessment, followed by an in-depth discussion at the end of each tutorial session of their PBL curriculum during the block. Conclusions: Our data strongly support our hypothesis that formative assessment based on student-written MCQs enhances learning in a PBL-based curriculum.



How to cite this article:
Mushtaq M, Mateen MA, Haider KH. Student-generated formative assessment and its impact on final assessment in a problem-based learning curriculum. Saudi J Health Sci 2020;9:77-83







 Introduction



Assessment is fundamental to the process of teaching and learning, is undertaken for a specific purpose, and is the final step in curriculum delivery.[1] Although the primary aim of assessment has traditionally been to measure student learning, it is increasingly being used to support students' learning as part of a more systematic "assessment for learning" approach.[2]

In this regard, formative assessment facilitates active learning by engaging learners and their peers to bring forth their strengths and to open room to work on their weaknesses.[3],[4] In contrast to summative assessment, formative assessment is process focused rather than outcome oriented. It is primarily aimed at early identification of students' deficiencies in learning, course contents, and instructional design, and at offering them insights into areas of improvement. Since these two assessment approaches complement each other, alignment of their question formats is a prerequisite for their success.

Since their inception in the 1950s, multiple-choice questions (MCQs) have been increasingly used for knowledge assessment in both formative and summative approaches.[5] Their widespread acceptance for assessment in both clinical and nonclinical settings rests on their logistic advantages and cost-effectiveness, in addition to high objectivity, validity, and reliability.[6],[7]

Well-structured MCQs have an exceptional ability to test knowledge and factual recall, thus assessing the base of Bloom's taxonomy. MCQs are also gaining widespread acceptance for assessing higher-order cognition, provided their format is modified to match the required taxonomic level.[8]

Although various formats of MCQs have been devised, creating high-quality MCQs remains a challenge for medical faculty, who find it increasingly difficult due to time constraints.[9],[10] As students' involvement is critical during the process of assessment, their active involvement in MCQ writing has been explored as an active learning strategy with promising results.[9],[11] Moreover, quiz-based formative assessment enhances students' motivation in the learning process.[12]

A vast majority of the students have difficulty in answering MCQs. Hence, a mixed response was observed in the students' feedback regarding the MCQs: questions were not well phrased, questions were not aligned with the learning objectives, and questions probed deeper learning concepts. Although "assessment drives learning" is a generally accepted norm in the assessment literature, if the assessment tool is not well understood, it will fail to drive learning; instead, a student will only think of passing the examination.[13],[14],[15]

The medical college of our university follows a student-centered problem-based learning (PBL) approach for curriculum delivery in the medical program. This provides an excellent opportunity to explore student-led MCQ writing as a formative tool and to study its impact on student performance in the end-of-block summative assessment. PBL employs a flexible approach to assessment and includes a variety of assessment tools. Furthermore, one group has demonstrated the benefit of such a flexible approach, which has been in use in PBL curricula across the world.[16]

The primary aim of this retrospective study was to analyze the student feedback data for Block 1.4, entitled "Thinking and Doing", and to address a common problem reported in the student feedback, namely "a missing link between the examination and the course contents." We hypothesized that involving students in MCQ writing would motivate them toward improved learning and would help them understand what sort of MCQs to expect for a particular learning goal. We validated this working hypothesis by analyzing student feedback on the end-of-block examination with and without their involvement in MCQ writing.

 Methods



The notion of student-generated MCQs was proposed during the two planning group meetings conducted before the commencement of Block 1.4 for the academic session 2017–2018 and was unanimously accepted by the planning group members.

The medical curriculum taught in our institute spans 6 years; the first 3 years encompass teaching of basic sciences, whereas the final 2 years focus on clinical sciences. Our novel medical curriculum is based on an integrated, problem-based, and student-centered approach. The basic science curriculum of the first 3 years is further categorized into blocks and clusters of 4–8 weeks' duration. Each block integrates various theme-based topics and can be regarded as a "module" for wider understanding. The current study was done in Block 1.4, named "Thinking and Doing."

The process was followed for two consecutive academic years for the same block. The students were briefed about this activity during the block-opening lecture by the coordinator. Furthermore, the tutors and students were provided in-depth information regarding the best practices for MCQ writing, and the following steps were followed:

In the block-opening session, the students were oriented about MCQ writing. They were provided with information on how to write MCQs, including:

- The components of a good MCQ: stem, answer key, and proper distractors
- Bloom's taxonomy for learning and assessment, including the various levels of cognition/difficulty (C1, C2, and C3)
- Use of single best answer type questions, as they reflect our end-of-block assessment.

1. During the postdiscussion of each case, the students were required to bring 5–7 MCQs related to the learning objectives of the case being discussed. The leader for the case was assigned the task of collecting these questions in a single file.
2. The selection of MCQs was based on the intended learning objectives (ILOs). These ILOs were aligned with SAUDIMEDS, approved by the college educational council. Second, the level of the students was taken into consideration, and most of the selected questions were of C1 and C2 level, reflecting the level of Year 1 students. Case-derived, scenario-based MCQs were also selected from the pool, as we wanted to assess the students' ability to apply the knowledge gained during the tutorial sessions. An open question-and-answer session was used to check student understanding.
3. All tutors were directed by the coordinator to ensure that the case-relevant MCQs were discussed at the end of each postdiscussion.
4. The MCQ-relevant discussion was concluded with in-depth reasoning to justify the selection of the right answer and the rejection of the distractors in each MCQ.
5. The tutors in each tutorial group encouraged full participation of the students in this activity.

All the MCQs were collected by the tutors and uploaded on the e-learning platform, giving students from all tutorial groups access to all the MCQs with answer keys.

In the end, we estimated a pool of about 350 MCQs, which we considered sufficient for the students to practice with and prepare for their final examination, which is usually computer-based and consists of 70–75 MCQs. As an incentive, 5%–8% of the MCQs in the final examination were randomly selected from the large pool of student-generated MCQs, with their stems slightly rephrased.
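As a rough arithmetic check (our reading of the stated percentages, not a figure reported by the assessment unit), 5%–8% of a 70–75-question paper corresponds to roughly 0.05 × 70 ≈ 4 to 0.08 × 75 = 6 student-generated items appearing in the final examination.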

After the final examination, the students filled in the online feedback form designed by the Planning, Development, and Quality Assurance (PDQA) department. Student feedback is a routine practice in every academic block and cluster, wherein items are graded from 1 to 10 and a section for open comments is additionally included. The computer-generated student feedback report is then sent to the block coordinator, who makes a complete analysis to design an improvement plan for the block for the next academic year in consultation with the planning group. We used some of the relevant items of this report in this study and compared these numbers with those of the academic year 2016–2017.

Besides PDQA-generated feedback, the block coordinator also designed a Google survey consisting of seven items related to this activity. The students completed the survey, and the average response was recorded.

Statistical analysis was performed using Student's t-test in IBM SPSS, version 25 (IBM Corp., USA), and P < 0.05 was considered statistically significant.
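For readers who do not use SPSS, the comparison described above can be reproduced with any standard statistics package. The sketch below is illustrative only: it shows an independent-samples Student's t-test in Python, with placeholder arrays standing in for per-item feedback scores (these values are hypothetical, not the study data).

```python
# Illustrative sketch of the independent-samples t-test described above.
# The score arrays are hypothetical placeholders, NOT the study data.
from scipy import stats

feedback_2016_2017 = [6.1, 5.8, 6.4, 5.9, 6.0, 6.2]  # hypothetical item scores (1-10 scale)
feedback_2017_2018 = [7.2, 7.0, 6.9, 7.4, 7.1, 7.3]  # hypothetical item scores (1-10 scale)

t_stat, p_value = stats.ttest_ind(feedback_2017_2018, feedback_2016_2017)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 would be considered significant
```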

No ethical approval was needed for this study. None of the authors or students anticipated that student-generated, case-related questions would have such an impact; only after 2 years, when the block coordinator went back to review the two course reports, was the significant outcome of the intervention recognized.

 Results



The purpose of the study was to demonstrate the significance of the formative assessment introduced in the academic year 2017–2018. In 2016–2017, students had graded Block 1.4 critically in their feedback. Our main consideration was the examination-related items that were poorly graded by the students, the most critical of which was "Was there a link between the block and the final examination."

To overcome this, student-generated formative assessment was implemented for two consecutive years in the same block, involving 103 and 101 students in the academic sessions 2017–2018 and 2018–2019, respectively. For comparison, data from the academic session 2016–2017, with 83 students, were included. As shown in [Figure 1]a, changes were recorded in all the items after the introduction of the formative assessment. [Figure 1]b shows the cumulative score, which increased by 15.4% (2017–2018 vs. 2016–2017, P < 0.05) and improved further to 40% for 2018–2019 (P < 0.04). The benefit of this formative assessment was consistently reflected in the summative assessment in our study.{Figure 1}
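For clarity, and assuming the percentages above denote relative change with respect to the 2016–2017 baseline (the paper does not spell out the formula), the figures correspond to the standard calculation
$$\Delta\% = \frac{S_{\text{intervention}} - S_{\text{baseline}}}{S_{\text{baseline}}} \times 100,$$
where $S$ is the cumulative feedback score shown in [Figure 1]b.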

[Figure 2] highlights the perceived difficulty level of the examination, recorded as the percentage of students giving each response. The students' perception of the examination as difficult fell by 24% (P < 0.05) and by a further 31% (P < 0.04) in the academic years 2017–2018 and 2018–2019, respectively. These data suggest that students were better equipped to answer MCQs in the final examination after the introduction of the formative assessment in 2017–2018. Similarly, in [Figure 2], the proportion of students rating the examination as very difficult fell to 14% and 8% for 2017–2018 and 2018–2019, respectively.{Figure 2}

An apparent increase in the coefficient of internal consistency of the final examination is noticeable during the academic years 2017–2018 and 2018–2019 of the same block. This result implies that the students could comprehend the questions better in Block 1.4 [Table 1].{Table 1}
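The paper does not state which reliability coefficient the assessment unit reports; internal consistency of an MCQ paper is commonly summarized with Cronbach's alpha (equivalent to KR-20 for dichotomously scored items). A minimal sketch of that calculation, using a hypothetical item-response matrix rather than the study data, is shown below.

```python
# Minimal sketch: Cronbach's alpha from an (examinees x items) score matrix.
# For 0/1-scored MCQs this reduces to KR-20. The responses are hypothetical, NOT study data.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    k = item_scores.shape[1]                          # number of items
    item_vars = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = np.array([            # rows = students, columns = MCQ items (1 = correct)
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```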

Student Google survey

With the help of the Year 1 student leader, we conducted a Google survey to obtain the students' opinion of this activity. Sixty percent of the students participated in the survey.

The results of the Google survey [Figure 3] show the following:{Figure 3}

1. Ninety-three percent of the students believed that MCQs are the best way to assess their knowledge.
2. Sixty-three percent of the students felt that the use of student-generated MCQs helped them in their learning process; however, 37% disagreed and felt that it did not have much impact on their learning.
3. Ninety-one percent of the students contributed to this activity.
4. Seventy-five percent of the students said that the MCQ activity at the end of the postdiscussion helped them recall and apply their learned knowledge; conversely, 25% disagreed.
5. Sixty-seven percent of the students recommended that other block coordinators use such an activity in their blocks, while 33% did not recommend it, as they felt it did not help their learning process; this corroborates the finding in point 2 above.
6. Eighty-two percent of the students found it easy to find MCQs for this activity.
7. Eighty-seven percent of the students felt that the MCQ preparation helped them prepare for the final examination of the block; however, 13% did not feel it had any impact on their final examination. This corroborates the finding in point 4 above, where 25% of the students could not recall and apply their knowledge.

 Discussion



The main finding of our retrospective cross-sectional study is that the active participation of students in formative assessment, through their contribution to MCQ development, significantly enhances their learning, as reflected by their improved performance in the summative assessment at the end of the course. Moreover, this incentive-driven activity helps the students better understand MCQs as an assessment tool.

Our medical college has a robust assessment process and student evaluation system, supported by intricately designed assessment blueprints for every block, cluster, and rotation. This process guides every coordinator in planning the summative assessment. More importantly, every assessment is followed by a post-examination Qualtrics survey sent by the PDQA for student feedback on the respective test. The student evaluations of the block assessment from previous years showed a weak link between the examination and the block contents.

A subsequent meeting of the block coordinator with the students revealed that the students needed more exposure to MCQ-based test questions to help them prepare better for the final examination. These findings indicated that a thorough understanding of the assessment tool is imperative for the students.

Formative assessment is integral to the learning process as it produces in-depth information for the learners, information that may be rich both quantitatively and qualitatively. Formative assessment has to be valid, reliable, and practical so that it produces positive effects on learning. On the same note, formative assessment should be intended to reveal students' strengths and weaknesses and to create improvement opportunities for their future development. The outcome of formative assessment is well accepted by the students as it enhances their future performance.[17],[18]

Our study was designed to determine the effect of holding a formative assessment session at the end of each postdiscussion tutorial session to help the students in their learning and to enhance their MCQ-solving skills. Our novel approach included student-derived MCQs to augment their awareness regarding the "link between the ILOs and the examination."

Together with modified essay questions, MCQs form the backbone of knowledge assessment methods in both undergraduate and postgraduate medical education. They are also gaining popularity for the assessment of clinical competence.[19],[20] Among the different types, single best answer MCQs have become a preferred choice for formative assessments as they mimic the questions used in summative assessments.[21] Despite their advantages, the time-intensive nature of MCQ-based formative assessment remains a significant hurdle for the medical faculty because of their busy schedules.[10],[22] The strategy of using student-generated MCQs for formative assessment has emerged as an alternative approach, albeit with diverging results.[10],[23]

Hardy et al. analyzed the impact of student-generated MCQs using the PeerWise approach and reported that the activity promoted deep learning among the students. Although the participating students developed desirable behaviors in learning and synthesizing the delivered information, they acknowledged only meager educational value in engaging in this PeerWise activity.[23]

Our results are supported by previously published data showing that students who actively participated in MCQ development fared better in their learning than students who were only passively involved in MCQ assessment.[24] The positive perception of students regarding their active participation in MCQ-generation activities has also been reported by others.[24],[25] Moreover, student-written single best answer questions have shown a positive correlation with performance in final summative examinations.[26]

Our study provides an added incentive for the students to enhance their active participation in the assessment process. We promised to include up to 5% of the student-generated MCQs as part of their final end-of-block summative assessment. Our incentive-driven approach shows that the students benefitted from the question-writing exercise, as was evident from their performance during the end-of-block summative assessment.

In their feedback, the students recommended not only continuing the activity in Block 1.4 in the future but also extending it to other blocks. The incentive-driven participation of the students in the MCQ-generation activity helped us overcome students' lack of interest in participating, which otherwise arises from the burdensome and time-intensive nature of writing quality MCQs.[27]

After the implementation of the student MCQ-generation activity for formative assessment, students' feedback on the item "the link between the block and final examination" for 2018–2019 improved by 20% compared to the previous academic years for the same block. Students' comprehension improved by 40%, as shown by their clarity in understanding the questions, their preparedness for the final examination, and the alignment of the questions with the ILOs.

The students' preparedness and familiarity with MCQs were reflected in their ratings of examination difficulty, which were perceived to be lower by 24% and 31% during the 2 years of MCQ activity implementation. The internal consistency of the assessment improved by 3%, reflecting the overall quality of the examination. In addition, our student survey showed that after the implementation of the MCQ preparation activity, 92% of students found MCQs to be the most effective method of assessment, while 63% of students found the activity helpful for deep learning. Based on their experience, 67% of students recommended that the practice be used in future blocks, and 87% felt that the activity helped them prepare for the final examination.

Despite these encouraging data, our study is not without limitations. First, the study includes data from only three academic sessions, during two of which the student-generated MCQs were used. Future studies should include other blocks to examine a more generalized effect of the activity. Second, it would be prudent to provide a training session for the students on generating quality MCQs. This could be complemented with training on e-learning tools and online strategies such as a "student wiki" to conduct formative assessment sessions, rather than restricting the activity to the postdiscussion of each case.

 Conclusions



Incentive-driven, student-led MCQ development and analysis sessions, as part of the "assessment for learning" approach, would be a novel strategy to enhance students' learning, especially in a PBL approach to delivering the medical curriculum. Furthermore, students' performance improves, provided they have understood the essence of the assessment tool during their learning process.

Acknowledgment

We are thankful to all the tutors of this block, who meticulously followed this activity in their respective tutorial groups; to the PDQA, whose feedback criteria helped us analyze the data critically and produce the findings in this manuscript; and to all members of the planning group of this block, who appreciated the idea of conducting the formative assessment. Last but not least, we thank the assessment unit for their logistic support during the final examination and for providing the examination analysis.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

References

1. Swanwick T. Understanding Medical Education: Evidence, Theory and Practice. The Association for the Study of Medical Education. 2nd ed. USA: Wiley-Blackwell; 2013. p. 1-6.
2. Brown GT. Is assessment for learning really assessment? Front Educ 2019;4:1-7.
3. Black PJ, Wiliam D. The Formative Purpose: Assessment Must First Promote Learning. Yearbook of the National Society for the Study of Education. Vol. 2. USA: John Wiley and Sons; 2005. p. 20-50.
4. Prashanti E, Ramnarayan K. Ten maxims of formative assessment. Adv Physiol Educ 2019;43:99-102.
5. Abdel-Hameed AA, Al-Faris EA, Alorainy IA, Al-Rukban MO. The criteria and analysis of good multiple choice questions in a health professional setting. Saudi Med J 2005;26:1505-10.
6. Moss E. Multiple choice questions: Their value as an assessment tool. Curr Opin Anaesthesiol 2001;14:661-6.
7. Epstein RM. Assessment in medical education. N Engl J Med 2007;356:387-96.
8. Scully D. Constructing multiple-choice items to measure higher-order thinking. Pract Assess Res Eval 2017;4:15-22.
9. Tarrant M, Ware J. A framework for improving the quality of multiple-choice assessments. Nurse Educ 2012;37:98-104.
10. Harris BH, Walsh JL, Tayyaba S, Harris DA, Wilson DJ, Smith PE. A novel student-led approach to multiple-choice question generation and online database creation, with targeted clinician input. Teach Learn Med 2015;27:182-8.
11. Kurtz JB, Lourie MA, Holman EE, Grob KL, Monrad SU. Creating assessments as an active learning strategy: What are students' perceptions? A mixed methods study. Med Educ Online 2019;24:1630239.
12. Evans DJ, Zeun P, Stanier RA. Motivating student learning using a formative assessment journey. J Anat 2014;224:296-303.
13. Al Kadri HM, Al-Moamary MS, van der Vleuten C. Students' and teachers' perceptions of clinical assessment program: A qualitative study in a PBL curriculum. BMC Res Notes 2009;2:263.
14. Cilliers FJ, Schuwirth LW, Adendorff HJ, Herman N, van der Vleuten CP. The mechanism of impact of summative assessment on medical students' learning. Adv Health Sci Educ Theory Pract 2010;15:695-715.
15. Cilliers FJ, Schuwirth LW, Herman N, Adendorff HJ, van der Vleuten CP. A model of the pre-assessment learning effects of summative assessment in medical education. Adv Health Sci Educ Theory Pract 2012;17:39-53.
16. van der Vleuten CP, Scherpbier AJ, Wijnen WH, Snellen HA. Flexibility in learning: A case report on problem-based learning. Int High Educ 1996;1:17-24.
17. Miller A, Imrie B, Cox K. Functions of Assessment. Student Assessment in Higher Education. 1st ed. London: Kogan; 1998. p. 23-40.
18. Brown GA, Bull J, Pendlebury M. Assessing Student Learning in Higher Education. 1st ed. London: Routledge, Taylor and Francis Group; 1997. p. 21-88.
19. Tangianu F, Mazzone A, Berti F, Pinna G, Bortolotti I, Colombo F, et al. Are multiple-choice questions a good tool for the assessment of clinical competence in internal medicine? Ital J Med 2018;2:88-96.
20. Pham H, Trigg M, Wu S, O'Connell A, Harry C, Barnard J, et al. Choosing medical assessments: Does the multiple-choice question make the grade? Educ Health (Abingdon) 2018;31:65-71.
21. Tan LT, McAleer JJ; Final FRCR Examination Board. The introduction of single best answer questions as a test of knowledge in the final examination for the fellowship of the Royal College of Radiologists in Clinical Oncology. Clin Oncol (R Coll Radiol) 2008;20:571-6.
22. Ware J, Vik T. Quality assurance of item writing: During the introduction of multiple choice questions in medicine for high stakes examinations. Med Teach 2009;31:238-43.
23. Hardy J, Bates SP, Casey MM, Galloway KW, Galloway RK, Kay AE, et al. Student-generated content: Enhancing learning through sharing multiple choice questions. Int J Sci Educ 2014;13:2180-94.
24. Bottomley S, Denny P. A participatory learning approach to biochemistry using student authored and evaluated multiple-choice questions. Biochem Mol Biol Educ 2011;39:352-61.
25. Schullo FA, Janke KK, Chapman SA, Stanke L, Undeberg M, Taylor C, et al. Student-generated, faculty-vetted multiple-choice questions: Value, participant satisfaction, and workload. Curr Pharm Teach Learn 2014;1:15-21.
26. Walsh J, Harris B, Tayyaba S, Harris D, Smith P. Student-written single-best answer questions predict performance in finals. Clin Teach 2016;5:352-6.
27. Grainger R, Dai W, Osborne E, Kenwright D. Medical students create multiple-choice questions for learning in pathology education: A pilot study. BMC Med Educ 2018;18:201.