Assessing Outcomes of Simulation in Communication Sciences and Disorders

Assessment is an ongoing process that is necessary at every stage of designing, implementing, and evaluating simulation-based learning experiences (SBLEs). Designing and aligning a high-quality assessment process provides instructors and researchers with valuable data to understand whether students have met the desired simulation learning objectives, where students are in their learning, and where there are opportunities to enhance the SBLE. This reflection discusses the importance of assessing student learning outcomes as well as the effectiveness of all SBLEs in communication sciences and disorders (CSD). While the benefits and effectiveness of simulation have been demonstrated in other health professions, simulation research is in its infancy in CSD. Building the evidence to inform systematic integration of simulation into CSD curricula and to advance best practices in our field is essential. Further, to advance the Scholarship of Teaching and Learning (SoTL) in simulation, researchers and instructors must disseminate their findings, measures and tools, assessment processes, and even simulation scenarios. Dissemination serves to enhance evidence-based education practices and to further validate the assessment processes we use to ensure quality simulation.

Just as we teach the importance of evaluating the evidence behind assessment and treatment approaches used with patients, it is equally important that we practice this diligence with our instructional methodologies (Ginsberg et al., 2017). As educators of future professionals, this applies to all of us. Our pedagogical decision-making should be guided by evidence-based education, which serves as a conduit to evidence-based practice for our students (Dalton et al., 2017; Ginsberg et al., 2017). This article focuses on the importance of assessing learner outcomes of SBLEs. While simulation is a powerful educational tool that exploits the benefits of active learning pedagogy and seeks to achieve deep learning objectives that transfer to other clinical environments, SBLEs can be time and resource intensive (Weller et al., 2012). Therefore, it is prudent for academic programs to consider whether resources are being used in ways that maximize efficiency and learner outcomes.
Assessment of student performance and rigorous evaluation of our SBLEs are essential to demonstrating the effectiveness of simulation and to supporting systematic, strategic integration of SBLEs into our already crowded curricula. Assessing individual student performance, along with gathering data about specific aspects of every SBLE, should be used to evaluate the effectiveness of our simulations and simulation programs. Suskie (2018) advocates that rather than talk about good or bad assessment tools, we as educators must consider designing a quality process: "[E]ffective assessment practices…yield evidence of student learning used to inform meaningful and substantive changes to teaching and learning" (Suskie, 2018, p. 25). Both assessment of individual student achievement and evaluation of the SBLE provide feedback that can serve to better understand where students are in their learning and to inform enhancement of the SBLE. The International Nursing Association for Clinical Simulation and Learning (INACSL, 2016) provides guidance for best practices in simulation that is applicable to all simulationists designing and implementing high-quality, evidence-based simulations. One of the INACSL guiding documents, and an essential component of best practices, is Participant Evaluation. Under this guideline, all simulations must include a planned evaluation, which may be formative, summative, or even high stakes (INACSL, 2016). Evaluation in every SBLE is essential for accurately assessing success in meeting learning objectives, objectively examining the effectiveness of simulation, and supporting student learning through quality SBLEs. A well-planned and intentional process of assessing learners' performance and evaluating the degree to which the objectives of the SBLE were met are essential components of quality simulation.
The purpose of this reflection is to introduce and review the assessment of student learning outcomes in SBLEs. Establishing a well-designed plan for assessing student achievement of learning objectives is essential to understanding the impact of SBLEs on student learning and to developing quality, effective simulations. While evaluation of the SBLE experience itself will not be discussed specifically, instructors and researchers are encouraged to include opportunities for participants to evaluate the SBLE in terms of logistics, aspects that supported learning, and opportunities for improvement so that future SBLEs may benefit. This paper addresses assessment of student outcomes, which informs the effectiveness of the SBLE in meeting its objectives. Further, readers are encouraged to consider challenges facing the field, including the development of valid assessments and assessment processes and the dissemination of findings that strengthen the evidence supporting simulation in CSD.
As in all quality educational activities, clearly defined learning objectives guide the assessment process and must be part of SBLE design and planning from the beginning. Quality assessment is driven by the objectives of the SBLE and should draw on data from multiple sources (e.g., tests, surveys, checklists, qualitative feedback), because a combination of quantitative and qualitative data provides deeper insight into student learning (Dalton et al., 2017). Evidence from nursing supports clinical learning through simulation and the transfer of that clinical skill learning and theoretical knowledge to the clinical environment (Hayden et al., 2015; Weller et al., 2012). Evidence of simulation effectiveness is still limited in CSD but is growing as simulation has gained acceptance in the field and more researchers and instructors disseminate their findings.

A Framework for Assessing Student Learning in Simulation
Interest in the use of simulation in speech-language pathology and audiology education is arguably on the rise. To support this growing interest in simulation in CSD, the Council of Academic Programs in Communication Sciences and Disorders (CAPCSD) published an electronic book on simulation best practices that provides an overview of the simulation process and key considerations when designing and implementing high-quality simulation experiences. However, while interest and research in the use of simulation in CSD have increased, there continues to be a dearth of evidence in the speech-language pathology and audiology literature addressing methods and tools for the assessment of simulation outcomes.
The development and validation of measures demonstrating the impact of educational interventions, such as simulation, are essential. Such measures will contribute to our confidence as educators that simulation is an effective approach to clinical teaching and that the simulations we design and implement are well-designed, quality SBLEs. Other fields have demonstrated the efficacy of simulation in increasing students' knowledge, skills, confidence, and behaviors, but there continue to be challenges in establishing valid and reliable approaches to assessing outcomes (Lee & Oh, 2015). Dudding et al. (2019) categorize assessments as formative or summative based on their timing and purpose. These types of assessments are not dissimilar to those we conduct throughout our didactic courses and serve to monitor student progress toward meeting course objectives and/or competencies. Specifically, assessments should directly measure the learning objectives of the SBLE so that we can critically examine the SBLE and determine that student learning is a result of the learning experience (McKinney, 2012). However, what we should assess to ensure quality outcomes, and how we should measure those outcomes, continues to be an area of deficiency in the literature.
A frequently applied framework for evaluating student learning outcomes in simulation is Kirkpatrick's model of learning (Johnston et al., 2018; Yardley & Dornan, 2012). This model details four levels of evaluation that can guide assessment in simulation: Reaction, Learning, Behavior, and Results (Weller et al., 2012). Much of the early simulation research in other health professions, and in CSD, focused on assessments of student perceptions and experiences in simulation (Reaction). Student perceptions, and even instructor perceptions, are essential to evaluating what is and is not working with simulation, the learners' values, and the buy-in of those involved in the process. As more research investigating simulation in CSD is published, there is increasing evidence of student knowledge and skills developing through simulation (Learning) (Carter, 2019; Clinard, 2018; Dzulkarnain et al., 2015; Ferguson & Estis, 2018; Kaf et al., 2013; Miles et al., 2016). Assessments demonstrating learning contribute to the evidence supporting simulation. Evidencing higher levels of transfer of learning (Behavior and Results) is more difficult, because doing so requires measuring changes in student behavior in the clinical environment (Behavior) and the impact on patient outcomes (Results). Continued effort across audiology and speech-language pathology to grow the evidence by evaluating all levels will further strengthen our confidence in incorporating SBLEs into our programs.

Learner Outcomes to Measure
Research across health fields has demonstrated that SBLEs can positively influence student attitudes, knowledge, and skills (Boling & Hardin-Pierce, 2016; Frenzel et al., 2018; Hayden et al., 2015; Nestel et al., 2017; Watson et al., 2012). Further, studies have also assessed self-efficacy (confidence) and clinical decision-making (Boling & Hardin-Pierce, 2016; Macauley, 2018; Watson et al., 2012; Woda et al., 2017). Different types of assessments can be aligned with Kirkpatrick's levels of evaluation. By evaluating across levels, from Reaction and Learning to Behavior and Results, we can strengthen the evidence that simulation supports student learning and the development of clinical skills and behaviors that positively impact the individuals receiving students' services. Instructors and researchers designing SBLEs must consider their objectives and select assessments that contribute to the growing SoTL literature in simulation. Outcome measures evaluating student achievement of SBLE objectives evidence student learning and can support the effectiveness of simulation. This section examines a range of outcome types that can be measured and used to examine student learning and SBLEs.
Attitudes. Student attitudes, perceptions of learning, and satisfaction are among the most commonly evaluated aspects of simulation. Surveys and measures of student perceptions provide educators with valuable insight into student experiences, which can motivate ongoing improvement of the SBLE and inform integration of simulation into curricula. Attitudes may provide data regarding the buy-in of students as well as clinical educators, faculty, and administration, and how engaged students are in the learning process. Surveys of attitudes and experiences indicate that CSD students enjoy simulation and identify SBLEs as beneficial (Berkowitz, 2017; Clinard & Dudding, 2019; Miles et al., 2016; Moineau et al., 2018; Ward et al., 2014). This is consistent with SBLEs in other health professions, which also report high levels of student satisfaction.
Attitudes and perceptions are most often measured through surveys with Likert scales, qualitative questions, focus groups, and interviews. Researcher-developed surveys are commonly reported measures of student perceptions (Lewis et al., 2018; Moineau et al., 2018; Ward et al., 2015). Validated surveys also exist and can be found in the nursing literature. For example, Clinard and Dudding (2019) utilized the Educational Practices Questionnaire (EPQ; National League for Nursing, 2005) in an investigation of student perceptions of computer-based simulation. The EPQ provided valuable information about students' perceptions of their learning experiences and what they consider important to their learning. This measure was also used by this researcher to assess student perceptions of high-fidelity human patient simulation; however, its usefulness is limited given typically highly favorable responses, which may not accurately discriminate between the strengths and weaknesses of SBLEs (Clinard, 2018). The Simulation Effectiveness Tool-Modified (SET-M; Leighton et al., 2015) is another example of a validated tool from nursing that gathers student perceptions of learning in simulation. The benefit of this measure is that it specifically targets the essential components of simulation (prebriefing, simulation, and debriefing). Specific feedback on each component is particularly informative because it highlights the effectiveness of, and potential need for modification to, these components to best support student learning.
When measuring student attitudes or the experiences of those involved in simulation, gathering multiple sources and types of information is critical for building a robust picture of the experience. The SET-M includes an open-ended question prompting students for additional comments, and these data can be rich sources of understanding. In research examining the strengths and weaknesses of computer-based simulation, we also engaged students in focus group interviews in order to hear the students' voices and their stories about what they experienced (Clinard & Dudding, 2019). Yet another approach is reflective journaling, which can engage students in a process of deepening their own learning and critical thinking as they reflect on the process. Journal entries also provide a rich text that can be analyzed for student perceptions, changes in thinking and behavior resulting from simulation, and learning in simulation. Rigorous qualitative analysis of these open-ended responses and reflections can provide context to student ratings and enrich our understanding of their experiences (Bussard, 2015, 2017; Miller, 2017).
Knowledge. Beyond measures of perceptions, there are no validated measures of knowledge, outcomes, or skills in the CSD literature. Knowledge assessments are typically designed by researchers and instructors based on the specific content being taught and targeted via simulation. These assessments are often administered pre- and post-simulation in order to demonstrate knowledge gains. Like other assessments of learning we administer in our courses, they may include items such as multiple choice, matching, and short answer (Clinard, 2018; Ferguson & Estis, 2018). Instructor- or researcher-created tools have benefits because they are designed specifically to align with the SBLE objectives and are specific to the learning outcomes of a given activity.
While knowledge assessments must be aligned to objectives and are specific to a particular SBLE, we can increase the validity of these instruments by having experts evaluate the tool to ensure that it is reliable and will obtain the data being sought. It is also critical that we as researchers and instructors share these tools. By sharing measures, we can broaden their use and further evaluate their application to other populations and circumstances. Replicating measures across multiple groups of students, institutions, and SBLEs will further enhance evidence of the quality of our simulations and their effect on student learning.

Self-Efficacy.
Beyond requisite knowledge and clinical skills, Bandura's (1997) theory of self-efficacy suggests that confidence and self-efficacy are critical to skill development and performance. Self-confidence, one's belief in one's own ability to perform actions and skills, is also critical. By developing confidence in their knowledge and skills, students are able to apply and generalize them to future clinical situations and to make clinical decisions in practice (Bandura, 1997). Further, consistent with Bandura's (1997) description of self-efficacy, students need to develop confidence and belief in their knowledge and abilities in order to function effectively in unpredictable and novel situations.
Also referred to as confidence in the literature, self-efficacy is often measured through surveys using Likert scale ratings of learners' perceived confidence in their ability to perform certain tasks. As with knowledge, there are no validated, standardized measures of self-efficacy. However, as I learned through research comparing high-fidelity simulation to written case studies, students make significant gains in their confidence with hands-on skills through simulation that are not seen in students who complete written case studies. The measure I used, like other confidence measures in the literature, was researcher-developed in order to capture responses specific to the areas being practiced through simulation (Clinard, 2018; Miles et al., 2016; Ward et al., 2014). A key consideration when developing rating scales of confidence or self-efficacy is to align scale items to the objectives of the SBLE and to the practical skills students are expected to experience, develop, or improve through participation. Taking outcomes and objectives into consideration helps ensure that the resulting data reflect the experience you designed and the learning you anticipated.

Clinical Skills & Decision Making.
Clinical skills are measured in nursing and medical simulation for routine psychomotor and procedural tasks. One example of how nursing and medicine implement clinical skills assessment is the Objective Structured Clinical Examination (OSCE). These assessments are typically summative and often high stakes. OSCEs provide a standardized approach to ensuring that learners have met a basic level of skill performance. Similarly, in audiology, outcomes of SBLEs have been assessed through practical examination of clinical skill performance, such as performing otoscopy and making a diagnosis (Dzulkarnain et al., 2015; Kaf et al., 2013). Berkowitz (2017) also assessed students' clinical skills following an SBLE by having them demonstrate the steps of passing an endoscope. The assessment was graded, but reportedly students received cues (formative feedback) from instructors to perform tasks successfully (Berkowitz, 2017).
When considering how to measure the effectiveness of simulation, clinical skills may be measured through a variety of approaches and could be used formatively to discuss performance with students during their debriefing. Miles and colleagues (2016) measured clinical skills using vignettes that were developed by clinical educators and piloted by experienced SLPs and registered dietitians. These vignettes were administered at three points in time and scored using a rubric (Miles et al., 2016). In a study by Ferguson and Estis (2018), clinical skills were measured using clinical judgment scores of infant feeding readiness and accurate documentation of clinical markers. These types of clinical skill measurements tap into deeper levels of learning and the integration of knowledge and skills for application in a clinical context. Carter (2019) utilized the SimuCase™ Clinical Skills Inventory (SCSI) to measure clinical decision-making following computer-based simulation, and an unpublished assessment of critical thinking, the Critical Thinking Test for Communication Sciences and Disorders, to assess generalized changes in students' decision-making in unrelated clinical scenarios. As Carter (2019) acknowledged, neither content validity nor reliability had been established for either of these non-standardized measures, so conclusions were limited. However, continued application of such measures and rigorous examination of their validity and reliability would contribute to more standardized approaches to measuring outcomes of such SBLEs.

Dissemination of Findings: A Critical Next Step
We gain valuable insight from each of the factors on which we can collect data: attitudes, self-efficacy, knowledge, clinical skills, and decision-making. By gathering different types of evidence and examining them together, educators gain a greater understanding of how simulation contributes to student development and learning. This understanding of the impact of a learning experience on students is among the most important roles of SoTL (McKinney, 2012). An essential element of moving beyond scholarly teaching and into SoTL is the dissemination of findings. Educators must not only evaluate simulation learning opportunities and assess student outcomes but also share these with the field through scholarly presentations and publications (Dalton et al., 2017; Ginsberg et al., 2017; McKinney, 2012). Sharing is the crucial element of SoTL: it is what will move the field forward. Dissemination of outcomes, measures, and scenarios will benefit the larger community of researchers and educators seeking to provide rigorous, well-designed, and validated SBLEs in order to best prepare students as clinicians.

Conclusion
Gathering and sharing diverse sets of data collected through a variety of methods advances our knowledge and skills as simulationists. These data provide evidence of how simulation supports student learning, which promotes evidence-based implementation and strategic integration of SBLEs into the CSD curriculum. As instructors and researchers design simulation experiences, we must take care to develop clear objectives and an effective assessment process to thoroughly examine student achievement and the success of the simulation. Assessment of SBLEs should occur at all stages of the process, with data critically analyzed to inform the use of simulation. Professionals need to share tools and approaches to assessment through collaboration across instructors, institutions, and disciplines to increase validation of measures and processes. In an effort to support those beginning to research and implement simulation, Table 1 includes examples of assessment tools and methods emerging in the CSD literature. Further, it is critical to be explicit about what every SBLE seeks to achieve and to design assessments that capture student achievement while also critically examining the SBLE for areas of potential improvement. Table 2 provides valuable resources, beyond the references cited throughout this paper, for those beginning to design simulation and SoTL investigations. Finally, we must strive to design assessments that examine the higher levels of Kirkpatrick's model (Behavior and Results). By measuring the impact on transfer of learning to the clinical environment and on patient care and outcomes, we can truly measure learning as students transition into clinical practice.

Disclosures
The author has no financial or nonfinancial relationships to disclose.