English Language Learners’ Perception and Motivation Towards Exam Format: A Qualitative Study

Abstract: The present study sets out to fill a gap in research on the impact of assessment methods on EFL students in the Moroccan context. Its objective is to explore the perceptions of students in the English department at Ibn Tofail University (ITU) in Morocco towards exam format, and the motivation behind their preferences. To achieve these objectives, a qualitative methodology was employed. The subjects of this study are nineteen undergraduate students currently enrolled in the English department at ITU.

computerized exams with machine-readable response sheets. These exams are often used so that students can demonstrate knowledge and command of content. Students are required to quickly recall and apply the information. Written exams can consist of short-answer questions, essay questions, or a combination of an essay question and short-answer questions. Written exams are designed to test not just how well students understand the subject but also their memory for facts. Short-answer questions are typically composed of a brief prompt that demands a written answer varying in length from one or two words to a few sentences. This question format aligns with a common student perception of assessment, one that emphasizes rote memorization (Mosquera & Meléndez, 2021). Short-answer questions also have a simple, straightforward structure (Cunningham, 1998). Essay questions provide a complex prompt that requires written responses, which can vary in length from a couple of paragraphs to many pages. Like short-answer questions, they give students an opportunity to explain their understanding and demonstrate creativity (Ward & Murray-Ward, 1999; Cunningham, 1998). Positive marking or grading means that students are awarded marks for correct answers but not penalized for incorrect ones, since no marks are deducted from the overall grade. This gives some students the option to guess when they do not know the correct answer (Holzinger et al., 2020).
In the past couple of years, universities around the world have been forced to switch to distance learning and testing. With this shift in the medium of instruction and examination, other changes had to be made. The changes imposed by the pandemic, coupled with the increasing number of enrolled students and the rising ratio of students per staff member, forced certain universities to opt for the MCQ format over the more traditional written format. The MCQ format was chosen mainly because it is easier and less time-consuming to evaluate; however, both students and faculty have raised concerns about the effect that reliance on MCQ exams alone can have on students' writing skills, critical thinking, and motivation.
This research study has been motivated by several considerations. The recent pandemic has accelerated the integration of technology in education, a process that was already underway due to technological advancement and globalization. The nature of the MCQ examination format makes it readily integrable with a computerized or semi-computerized testing environment. Additionally, studies have shown that MCQs can enhance student engagement, learning outcomes, and overall discipline (Ahmed et al., 2020). Moreover, the fact that MCQ exams are less time-consuming and require less manpower, since they can be graded electronically, makes them the preferred method of examination for overcrowded universities or universities with few faculty or staff. However, there has been a debate about the effectiveness of MCQ exams and the negative effects they can have on students' writing skills, among other things. Thus, this study aims to explore students' perceptions, preferences, and motivations toward different exam formats.
In the same manner that a force moves an object, motivation is a force that moves, guides, and strengthens human behavior toward a specific goal (Saville-Troike, 2006). Motivation plays for humans the role that an engine plays for a machine: it powers and directs human behavior. In addition to guiding human behavior, motivation also helps people choose the most appropriate actions to attain their goals (Ochsenfahrt, 2012). In the present study, motivation refers to the reasons students prefer one exam format over another. Perception refers to the way people regard or understand something in the real world; it is their view of reality. However, people's interpretation and perception can sometimes differ significantly from the truth (Lindsay & Norman, 1977; Altman et al., 1991). In the present paper, the term perception will be used to refer to how students feel and think about certain exam formats.
The present study was conducted at the Faculty of Languages, Letters, and Arts in Kenitra, Morocco. Its goal is to describe the problems that Ibn Tofail University (ITU) EFL students encounter with the different exam formats used at the university. To accomplish that, this paper is divided into two main parts. In the first part, the researcher reviews the literature and discusses previous research on this topic. The second part deals with the methodology used in this study and the presentation and analysis of the findings. This study aims to shed light on the issues with the different exam formats used at ITU and to explore students' perceptions of and motivations towards each exam format. The study's objectives are to (1) identify learners' perceptions concerning the effectiveness of each exam format in accurately assessing and evaluating their competence, and (2) determine which exam format is preferred by ITU EFL learners and what motivates their preference. The present study tries to answer the following research questions: Which exam format do ITU EFL students perceive as a better measure of their competence? Why do EFL students prefer a specific exam method?

LITERATURE REVIEW
In terms of measuring quality, validity and reliability should be considered for any type of exam. A good exam must be both valid and reliable. Validity is the most important factor when developing and evaluating tests and exams. It refers to the degree to which the interpretations of students' answers for a specific assessment are supported by evidence and theory (American Educational Research Association et al., 2014). The validity of an assessment method is commonly supported by three different types of evidence. Content validity refers to the extent to which a student's responses to a given exam reflect their knowledge of the content area being tested. Construct validity is the exam's ability to measure the constructs that it claims to measure. Finally, criterion validity is the extent to which the results of an exam systematically relate to current or future events (Moskal & Leydens, 2000). Face validity is another type of evidence that should be considered; it refers to the appearance of an exam and whether it seems to measure what it claims to measure. While face validity is not considered validity in the technical sense, it is still important for an exam to appear valid to students. According to Lyman (1998), face validity increases students' motivation towards the exam. Students will respond positively or negatively to exams based on their perception of the connection between the exams and the content and objectives of the course, which either increases or decreases the exam's reliability.
Exam reliability refers to the consistency of exam scores. For instance, on a reliable exam, a student would receive the same grade regardless of when the exam was taken, when the answers were scored, and who scored them. Validity and reliability are not all-or-nothing properties; both exist in degrees. After examining all the evidence, a judgment is made about the degree of validity and reliability of an exam. The reliability of essay scoring is constantly being questioned. In cases where there are no official grading keys, the grading behavior of essay graders can be influenced by many factors, such as the language, organization, and length of responses. Furthermore, computer-assisted essay scoring has been shown to be highly capable of emulating human ratings of essays. Nonetheless, the main objective of essay exams is to determine whether students have an in-depth understanding of the content (Brown, 2010). Multiple-choice tests are generally considered to be more reliable than essay exams because they test broad content and have very high scoring consistency. However, guessing on MCQ exams, whether done randomly among all choices or by eliminating some distractors first, limits their reliability to a great extent (Zimmerman & Williams, 2003).
One of the best-known orderings of cognitive educational objectives is Bloom's Taxonomy (Bloom et al., 1956), which has had a significant and lasting impact on teaching theory, curriculum development, and assessment methods. Bloom's Taxonomy consists of six cognitive levels of thinking: knowledge, comprehension, application, analysis, synthesis, and evaluation. MCQ exams do a fairly decent job of testing students' understanding at the levels of knowledge and comprehension because they offer a reliable and efficient way to measure understanding. They can also be used to measure simple achievements related to application, analysis, or evaluation. However, MCQ exams cannot test synthesis.
MCQ exams are often preferred by faculty for their ease of construction, time efficiency, and the fact that they can be machine graded. This allows instructors to regrade exams when necessary and to create multiple versions of the same exam, which enables greater control over cheating and makes the exams more accurate. The MCQ exam can also cover a broad range of content in a short period of time (Walstad, 2006). MCQ exams offer a very efficient way to collect and grade exam papers in classes with large numbers of students. Moreover, they make it easy to refer students to the correct answer in textbooks, and they provide timely feedback. MCQ exams have also been found to reduce self-reported test anxiety (Tan, 2013). From the students' point of view, MCQ tests are objective and avoid instructor bias. They give slow test takers the ability to earn partial credit, and they provide the option of guessing correct answers in positively marked exams (Simkin & Kuechler, 2005). In the same vein, a study by Vegi et al. (2022) found that the majority of students had a positive opinion about MCQs.
In the case of MCQ exams, even when students answer a question correctly, the instructor does not get a clear idea of the extent of the students' understanding of the concept being tested. MCQs cannot show how well versed a student is in a specific topic, or whether they have the necessary skills or sufficient knowledge to explain the same topic in a written format. MCQ exam feedback is usually unhelpful because students often simply learn whether their answers were correct rather than being shown the process used to arrive at the answer (Paxton, 2000). Many scholars note that MCQ exams do not give students the opportunity to organize, synthesize, argue coherently, or demonstrate creative and critical thinking. Unlike written exams, MCQ exams deny students the chance to express their knowledge in their own words. Many scholars believe that these skills are demonstrated better in essay exam formats (Tuckman, 1993; Lukhele et al., 1994; Bridgeman, 1992). According to constructivist learning theories, unconnected facts learned in a decontextualized manner disappear from learners' memory because they lack meaning and do not fit into learners' mental conceptualization. Thus, knowledge tested in MCQ exams will most likely be forgotten, since test recall is less likely to transfer to long-term memory (Resnick & Resnick, 1992). Some of the major advantages of written exams, according to the literature, are that they take less time to construct than MCQ exams and are significantly less labor-intensive to create. According to Tuckman (1993), essay-question exams can be employed to assess higher cognitive processes such as analysis, synthesis, and evaluation. The essay or paragraph-writing exam allows students to demonstrate original and critical thinking and to choose the knowledge they want to use, organize it, integrate it, and express their answer in a cohesive and coherent written format. Moreover, because MCQ exams limit students to naming and
identifying, actions such as describing, demonstrating, and explaining require a less structured exam format. Another reason written exams tend to be preferred is that they are very difficult to answer correctly by guessing (Piontek, 2008).
According to Walstad (2006), one of the biggest advantages of essay exams brings about their most serious disadvantage. Essay exams' ease of construction can result in unreliable assessment when exam questions are not clear. In such cases, graders may be forced to interpret what students intended in their responses. Another disadvantage of essay exams is the time requirement. While grading essay exams, professors and instructors are sometimes required to read an answer more than once. In large classes, this task is nearly impossible for one person to complete without the help of a grader, which raises concerns about inter-grader reliability. Besides, written exams cannot cover the same amount of content and knowledge that MCQ exams cover in the same amount of time. According to Ashburn (1938), the greatest disadvantage of written exams is the large amount of time required to grade them; this task is nearly impossible for instructors who do not have graders to help them. In addition, written exams expose graders to complaints of unfair evaluation or bias. The time commitment imposed on graders also raises a cost-benefit debate, especially in institutions that focus heavily on research.
The literature shows that students' preference for a specific exam format seems to have some validity. Research by Kennedy and Walstad (1997) has shown that a small but significant number of students might receive grades that are not on par with their true ability. This applies to pure MCQ and essay exams, as well as to hybrid exams combining the two formats. A previous study conducted by Brigui (2017) found that 56.6% of ITU EFL students prefer MCQ exams, while 22.7% prefer essay and short-paragraph writing, and 20.7% prefer short-answer questions. The same study found that 51.3% of the participating students think that MCQ exams are easy to prepare for, and 44.7% think that they yield higher grades. The study also showed that a correlation exists between students' self-assessed proficiency and their exam format preference, and that exam format affects students' study approaches: students with low proficiency prefer MCQ exams, while more proficient students prefer written exams. It is also important to note that a study by Edele et al.
(2015) found that self-assessment of language proficiency cannot be used interchangeably with test-based assessment, and that a clear distinction between the two concepts is essential in order to obtain valid information on language skills; however, other studies found that advanced students' self-assessments were aligned with their subsequent language performance and that self-assessment yields uniform outcomes across various contexts (Brantmeier et al., 2012; Ross, 2006). Another study, by Scouller (1998), found that students employed surface learning strategies for MCQ exams and deep learning strategies for essay exams. The study also found that students perceived MCQ exams as assessing surface cognitive levels of thinking and essay exams as assessing higher levels, such as analysis. Past research has also suggested that male students have an inherent advantage on MCQ exams because female students are less likely to guess the answer to a question when they are not sure of the correct answer (Tan, 2013; Bell & Hay, 1987; Lumsden & Scott, 1987; Bolger & Kellaghan, 1990; Bridgeman & Lewis, 1994; Holzinger et al., 2020). Additionally, numerous studies found that the characteristics of exams impact student performance differently based on gender, with female students preferring exams with essays and short-answer questions and male students preferring true-or-false questions (Wright et al., 2016; Kelly & Dennick, 2009; Hess et al., 2013; Aldrich et al., 2018).

METHODOLOGY
The target subjects of the present study are Moroccan EFL students in the Department of English Studies at the Faculty of Languages, Letters, and Arts at Ibn Tofail University in Kenitra, Morocco. Age is an irrelevant variable in this study; however, self-reported proficiency level is important, and gender was considered as well. Participants are undergraduate students: first-year students (N = 1), second-year students (N = 2), third-year students (N = 13), and students who recently graduated (N = 3). This decision was made in order to obtain a sample that is more representative of the population. Female respondents constituted 53% of total respondents (N = 10), while male respondents constituted 47% (N = 9).
A new questionnaire was specifically designed for this study in order to collect the information required to gain a deep understanding of the students' preferred exam format, the motivation behind their preference, and their perceptions of different exam formats. The questionnaire was drafted based on the research objectives. It is composed of two parts. The first part is concerned with basic information such as gender, current year of study, and level of proficiency on the Common European Framework of Reference for Languages (CEFR). In the third question, students were asked to describe their language proficiency level by choosing between intermediate, upper-intermediate, advanced, and proficient. Beginner and elementary options were not included, as the questionnaire was only given to English Studies students, and it was assumed that their language proficiency level would be higher than elementary. Due to a lack of resources, no proficiency tests were administered. The second part is composed of two sections, each of which is intended to answer one of the research questions. The first section comprises a set of semi-open-ended questions used to collect data about students' exam format preferences and the motivation behind such preferences. Respondents were asked questions such as: Can MCQ exams be used to evaluate writing competency? Do you think MCQ exams give you an advantage or a disadvantage? Respondents were asked to justify their answer after each question. The second section comprises a set of open-ended questions used to collect data about the students' perceptions of different exam formats. Students were asked questions such as: How would you describe the difficulty of MCQ exams in comparison with written exams? How does the exam format affect your learning habits and methods? How much time do you spend studying and preparing for a written exam? Respondents were asked to answer the questions by writing a short paragraph, which allowed them to express their
views on various exam formats.
Open-response questions prompt the participant to compose a coherent, short answer that is no more than a paragraph long. Since they do not have a right or wrong answer, they allow participants to respond in their own words. Open-response questions are considered a very good instrument for eliciting expansive and unanticipated responses in an unstructured manner, which allows researchers to gain a deeper understanding of what participants think about a particular issue. Therefore, they are considered best suited for exploratory research (Heigham & Croker, 2009). The questionnaire was emailed to a list of students enrolled in the English department at ITU, and a number of responses were received (N = 19).
A qualitative methodology was employed in this study, which follows an exploratory ex post facto design. Thematic analysis, a common strategy for qualitative research, involves the use of a coding system to arrange information in a way that allows researchers to derive logical and understandable inferences from the collected data (Suter, 2012). Therefore, the findings of the present study are analyzed using a semantic, inductive thematic analysis. In doing so, in-depth knowledge about students' views, perceptions, motivations, and experiences is discerned. After receiving the data, the researcher familiarized himself with it by reading through the answers and noting down his initial impressions. After familiarization, the data was coded using the qualitative data analysis software Atlas.ti. Quotations were compiled into sets of codes that were later examined and used to generate prevalent themes in the data. The generated themes presented recurring patterns in the respondents' answers. Finally, the themes were reviewed, defined, and named.
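The mechanical core of this workflow, once quotations have been tagged with codes, is grouping coded quotations under candidate themes and counting how often each theme recurs. The minimal sketch below illustrates that step only; the codes, quotations, and code-to-theme mapping are invented for illustration (the actual analysis was carried out in Atlas.ti):

```python
from collections import defaultdict

# Invented (code, quotation) pairs standing in for exported coding output;
# each participant quotation has been tagged with one low-level code.
coded_quotations = [
    ("improves writing skills", "Essay exams let me improve my writing."),
    ("machine grading trusted", "The questions can be scored by computer."),
    ("guessing as advantage", "Students can score points with a lucky guess."),
    ("improves writing skills", "Essay exams let me practice my writing style."),
]

# Invented mapping from low-level codes to candidate themes.
code_to_theme = {
    "improves writing skills": "Learning and Skill Development",
    "machine grading trusted": "Assessment Fairness, Clarity, and Reliability",
    "guessing as advantage": "Assessment Fairness, Clarity, and Reliability",
}

def group_by_theme(quotes, mapping):
    """Group coded quotations under their candidate theme and count them."""
    themes = defaultdict(list)
    for code, quote in quotes:
        themes[mapping[code]].append(quote)
    return {theme: len(qs) for theme, qs in themes.items()}

print(group_by_theme(coded_quotations, code_to_theme))
# → {'Learning and Skill Development': 2,
#    'Assessment Fairness, Clarity, and Reliability': 2}
```

In practice the judgment-laden steps (deciding the codes, merging them into themes, reviewing and naming the themes) remain manual; a tally like this only helps the researcher see which candidate themes recur across respondents.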

FINDINGS
After familiarization and coding, a series of themes was generated. The generated themes were then reviewed, and the final themes were defined as: learning and skill development; assessment fairness, clarity, and reliability; study strategies, habits, and time management; challenges and anxiety; and MCQ exam limitations.

4.1. Learning and Skill Development
The main motivation behind respondents' preference for essay writing and short-answer exams was that these types of exams motivated them to practice their writing skills and enrich their English lexicon. A male third-year student said, "I prefer essay exams because they allow me to improve and evaluate my writing competency." Another reason for the essay exam preference was students' belief that written exams require the use of higher cognitive levels of thinking and deep learning strategies. A female third-year student said, "Since I am studying at college, essay writing exams are good because they allow me to practice and improve my writing style. Another reason is that the essay writing exam is practical; it enables me to apply knowledge, analyze data or situations, synthesize, or evaluate." When it comes to hybrid exams, respondents wrote that hybrid exams might be a more efficient option, as they can test their writing, evaluation, critical thinking, and synthesis, as well as their knowledge and command of the subject. One third-year student wrote, "Hybrid exam will effectively help me develop my knowledge and writing skills." Furthermore, they wrote that the hybrid format covers more course content than written exams and gives them an incentive to practice writing skills that MCQ exams do not, with one student stating, "It could be very effective because the mcq part would assess more knowledge and the written part would allow professors to assess the students' writing skills."

4.2. Assessment Fairness, Clarity, and Reliability
Respondents who preferred hybrid exams were motivated by the fact that these exams test both higher and lower cognitive levels of thinking and can assess a broad command of knowledge and writing skills at the same time. A female third-year student responded, "A hybrid exam could evaluate both knowledge and writing skills at the same time." Another female second-year student said, "A hybrid exam allows the professor to test different skills and gives students a chance to answer more questions."
Participants who preferred MCQ exams perceived them to be clear and to leave little room for misunderstanding. A male first-year student said, "MCQ questions are clear and straight to the point, less chance being off topic." The same student stated, "MCQ exams also give you a chance at a fair grade compared to essay exams where you get one chance to answer one question correctly." This sentiment was shared by other students, who cited the fact that MCQ exams can be machine-graded as a motivation behind their preference. A male second-year student explained his preference for MCQ exams, stating, "For one reason, the questions are easy to mark and can even be scored by computer." Another female third-year student justified her preference by stating, "They are corrected by a machine." Machine grading also eliminates instructor error or bias and allows students to get their results in a timely manner. The same student stated, "If the exam, for example, is corrected by different professors, there can be no parity chances for all students because correction method may vary from one professor to another. Furthermore, the time allowed to professors for correction may affect if it is insufficient." Another male second-year student stated, "I trust computer-graded exams more, and I do not trust exams graded by graders other than the professor who gave the lecture and constructed the exam." Respondents perceived that MCQ exams cover more content, whereas written exams assess a deep and profound understanding of the subject. A third-year male student stated, "Well, MCQ exams only require choosing the correct question; only your knowledge is assessed, but when it comes to writing, everything is assessed, starting from your knowledge accuracy, writing style, format of writing, and also your creativity." These perceptions match previous research on MCQ exam preference, reliability, and coverage (Vegi et al., 2022; Walstad, 2006; Simkin & Kuechler, 2005).
In the same vein, another third-year female student replied, "MCQs are poor tools for measuring the ability to synthesize and evaluate information, apply knowledge to complex problems, or solve problems." Some respondents reported that they get better grades on MCQ exams thanks to the objectivity of the format and because it allows students to guess when they are not sure of the correct answer. One student stated, "Mcq exams can be the more objective exam format in terms of marking concerning exam correction. Also, students can score points with a lucky guess, unlike the writing format exams."

4.3. Study Strategies, Habits, and Time Management
When asked about how much time they spent studying for exams, the majority of respondents reported that they spent less time studying for MCQ exams. A female third-year student wrote, "I spend the time that can allow me to cover all course's points. And it takes more time for preparation in comparison to the mcq exams." However, some respondents said they spend the same amount of time studying for MCQ exams as they do for written exams. Concerning learning strategies and habits, respondents wrote that when studying for MCQ exams, they just have to understand and categorize the information presented to them. One second-year student wrote, "I focus more on details like names and dates when studying for mcq exams." However, when preparing for written exams, students said they have to make sure they can reproduce the information and practice their writing. A second-year student wrote, "If the exam is not MCQ, I make sure I am able to reproduce the answers, not only to categorize information in my mind." One respondent wrote, "I prepare for exams by taking mock exams or exams from previous years, so the exam format decides what material I'm going to use while preparing for the exam." Essentially, exam format dictates what material students use to prepare. Some students wrote that they focus on only one or a few chapters of the course when preparing for essay exams, since they will be required to answer only one of two or three questions; for MCQ exams, however, these students said that they have to cover all chapters.

4.4. Challenges and Anxiety
When it comes to the challenges they faced with each exam format, students wrote that written exams appear to be difficult because they focus on deep understanding rather than simple knowledge. Apart from this, a few respondents said that MCQ exams can contain difficult vocabulary items and require more attention to detail. One third-year student wrote, "In MCQ exams, I found difficulty in vocabulary," while another second-year student wrote, "For mcq exams, one should memorize all the details, so it needs more time." Another respondent wrote, "Sometimes not knowing the correct answer could discourage you and raise your anxiety level for the remainder of the exam." Similarly, some students believe that MC questions can have distractors that are too similar to the correct answer, thus misleading them and raising their anxiety level. One third-year student wrote, "In MCQ, by chance you can pass, and sometimes similarity in choices may lead you to make mistakes." Another third-year student reported, "The suggested responses to some questions look similar and confusing. We can't easily choose the right answer." This contradicts previous research, which found that MCQ exams reduce self-reported test anxiety (Tan, 2013).

4.5. MCQ Exam Limitations
Respondents wrote that MCQ exams cannot evaluate writing skills properly. One second-year student wrote, "MC questions only show that a student can recognize well-written text, but they do not show if they can produce it." For the same reason, respondents said that MCQ exams cannot properly assess competence in subjects like composition, translation, research methodology, semantics, pragmatics, and intercultural communication. One female third-year student wrote, "Modules like methodology should be written, and others like public speaking should be oral." Another second-year student wrote, "Composition, discourse analysis, semantics, pragmatics, any subjects that require critical thinking analysis, logical justification should be written."
A few female respondents recognized guessing in MCQ exams as a disadvantage. One female third-year student said, "Mcq exams can be the more objective exam format in terms of marking concerning exam correction. Also, students can score points with a lucky guess, unlike the writing format exams." Another female second-year student stated, "In MCQ exams, the choices sometimes are too similar, which makes me confused. In the end, I count on luck, which is a bad thing." Other, male, respondents said it was an advantage. One male second-year student stated, "MCQ exams give you a limited number of choices to think about and choose from, which is already a helpful start." These findings are consistent with previous findings that female students have an inherent disadvantage when it comes to MCQ exams (Tan, 2013; Bell & Hay, 1987; Lumsden & Scott, 1987; Bolger & Kellaghan, 1990; Bridgeman & Lewis, 1994; Holzinger et al., 2020).
Other perceived disadvantages were a lack of feedback about students' strengths and weaknesses, as well as forcing students to memorize a lot of details. One recent graduate wrote, "I have to remember too many details." Another female third-year student wrote, "They make us lazy and depend only on memorizing." Moreover, respondents believed that taking MCQ exams as the sole method of assessment would negatively affect them; specifically, it would discourage them from practicing their writing skills and force them to focus on surface learning only. One female third-year student replied, "Well, they encourage memorization of terms and details, so they affect our understanding of the content that remains superficial."

CONCLUSION
All of the students who participated in this study believe that MCQ exams cannot properly assess writing skills or competency in certain courses. Accordingly, some students believe that the MCQ exam should not be the sole method of assessment: while taking MCQ exams may yield higher grades, they believe it will negatively impact them in the long run. Regardless of their exam format preference, ITU's EFL students seem to be very well aware of the advantages and disadvantages of each exam format. The MCQ exam preference is mainly motivated by the practicality of this format, while the written format preference is mainly motivated by the ability of this format to encourage deep learning, enable higher cognitive levels of thinking, and improve writing skills. Some students perceive hybrid exams to be a better compromise between MCQ and written exams; thus, more research is required, since this exam format is not widely adopted at ITU.

5.1. Research Limitations and Recommendations
Due to time and resource constraints, the researcher had to rely on the questionnaire as the sole instrument of data collection for this study. An unstructured interview would have provided better insight into students' views on exam format. Similarly, piloting the questionnaire would have added more reliability to the study. The present study was conducted to explore the current research questions on a minimal scale, as well as to highlight some of the surface issues related to exam format that affect language assessment in higher education in the Moroccan ITU context. The methodology employed in the current study is unable to provide clear details about exam format-related issues faced by Moroccan ITU students, as it only provides brief descriptive results. It is hoped that future research relevant to the current study will employ a mixed methodology on a larger scale and incorporate quantitative findings to provide a more precise conclusion.

Appendices

Appendix A: Questionnaire

Definitions. Written exams: written exams can consist of short-answer questions, essay questions, or a combination of an essay question and short-answer questions. Hybrid exams: hybrid exams consist of a combination of two parts; the first part is an MCQ, and the second part consists of short-answer questions or short-paragraph writing.

Part 1: Basic information. Please put an "X" in the appropriate box.

Section 1: Learners' exam format preference. Please provide a detailed answer to the following questions.

1. Which exam format do you prefer? MCQ exam / Essay writing exam / Short-answer questions exam / Hybrid exam. Why?
2. Did factors like correction method and correction time affect your exam format preference? Yes / No. Justify your answer.
3. MCQ exams cannot properly assess your competence in certain subjects. Yes / No. If yes, identify which subjects.
4. Can MCQ exams be used to evaluate writing competency? Yes / No. Justify your answer.
5. Do you think MCQ exams give you an advantage or a disadvantage?