
A Mixed-Methods Investigation of Clicker Implementation Styles in STEM

    Published Online: https://doi.org/10.1187/cbe.17-08-0180

    Abstract

    Active learning with clickers is a common approach in high-enrollment, lecture-based courses in science, technology, engineering, and mathematics. In this study, we describe the procedures that faculty at one institution used when implementing clicker-based active learning and how they situated these activities in their class sessions. Using a mixed-methods approach, we categorized faculty into four implementation styles based on quantitative observation data and conducted qualitative interviews to further understand why faculty used these styles. We found that faculty tended to use similar procedures when implementing a clicker activity but differed in how they situated clicker-based active learning in their courses. These variations were attributed to different faculty goals for using clicker-based active learning, with some using it to engage students at specific time points throughout their class sessions and others selecting it from several possible teaching techniques as the best way to teach a concept. Future research should continue to investigate and describe how active-learning strategies from the literature may differ from what is actually implemented.


    Successful teaching in science, technology, engineering, and mathematics (STEM) is increasingly defined by the use of evidence-based instructional practices (EBIPs). Active learning with personal response systems, also known as “clickers,” is one such practice that has become popular in high-enrollment undergraduate STEM classrooms. While there are some prescribed methods for using clickers that are effective for producing learning gains, such as peer instruction (PI; Mazur, 1997; Crouch and Mazur, 2001), research suggests that faculty do not necessarily follow the precise steps in these methods when implementing clickers in their classrooms (Vickrey et al., 2015; Dancy et al., 2016). Given this, we sought to examine some of the ways clicker-based active learning was implemented in high-enrollment, lecture-based STEM courses, and why faculty implemented clicker-based active learning in these ways. Furthermore, we sought to understand how faculty situated clicker-based active learning into their classes and balanced clicker use with other teaching techniques. Quantitative classroom observation data were paired with qualitative interview data to gain insight into how and why faculty implemented clickers in their courses. The mixed-methods approach taken in this study highlights the complexity of implementing active learning into lecture-based undergraduate STEM courses (e.g., Hora, 2015).

    Clicker-Based Active Learning

    Clickers are a versatile teaching and learning technology and are used across many disciplines (Caldwell, 2007; Han, 2014; Lopez et al., 2014). A common approach to using clickers in STEM is in an active-learning context, in which faculty members intersperse questions throughout lecture for students to think about conceptually, problem solve, or discuss. One specific approach to active learning with clickers is PI (Mazur, 1997). The PI method involves the following seven-step procedure: 1) the instructor poses a question; 2) students think individually; 3) students provide their responses (often using a clicker); 4) students discuss the question with nearby peers, trying to convince peers of the correctness of their answers; 5) students provide their responses for a second time (again, often with a clicker); 6) the instructor tallies the responses; and 7) the instructor explains the correct answer (Mazur, 1997). PI is known for its effectiveness in increasing learning (Crouch and Mazur, 2001; Smith et al., 2009; Vickrey et al., 2015).

    However, research suggests that faculty do not always implement PI or other EBIPs in the same way that the practices were evaluated and shown to be effective (Henderson and Dancy, 2009; Dancy et al., 2016; Turpen et al., 2010). Hence, it is important and worthwhile to document the specific implementations of PI and other EBIPs faculty use, so that the critical components of these teaching strategies can be determined (Borrego et al., 2013; Stains and Vickrey, 2017). This lack of adherence to published literature has resulted in a wide variety of ways that clickers can be implemented in a course. For example, faculty can vary whether students work individually or in small groups (or both), whether students respond to the question once or multiple times, and whether faculty summarize the correct answer or ask students to explain to the rest of the class why they selected their answers. Faculty can also vary their implementation in the type of question they ask, when they ask the question, and why they ask the question (i.e., they may have varying goals for asking any particular clicker question).

    Observations of STEM Courses

    Real-time observations of STEM courses have become a popular methodology for evaluating and describing the teaching techniques used in STEM courses. There are a number of existing observation protocols, including the Reformed Teaching Observation Protocol (RTOP; Sawada et al., 2002), the Classroom Observation Protocol for Undergraduate STEM (COPUS; Smith et al., 2013), the Teaching Dimensions Observation Protocol (TDOP; Hora, 2013), and the Observation Protocol for Active Learning (OPAL; Frey et al., 2016). While some of the tools are evaluative in nature (e.g., RTOP), others focus on documenting and describing what occurred during a class session (e.g., COPUS, TDOP, and OPAL). Some of these protocols focus only on the instructor (e.g., RTOP and TDOP), while others focus on both the instructor and students (e.g., COPUS and OPAL).

    Several studies demonstrate how observation data can be used to document and describe STEM teaching. A study by Lund and colleagues (2015) used COPUS and RTOP data to categorize a diverse set of STEM classrooms into 10 “profiles.” These 10 profiles aligned closely with past literature on teaching, with each of the 10 falling broadly under the umbrella terms of lecture, Socratic method, PI, or collaborative learning. Another example of how observation data can be used to describe teaching comes from work by Lewin and colleagues (2016) observing STEM courses using COPUS. They described three “modes” of clicker use: clicker episodes that used only individual thinking and voting; episodes that used peer discussion; and alternative collaboration, in which a clicker question was not posed but students were allowed to discuss with peers. Finally, Hora’s (2015) work using TDOP observation data describes the multidimensional nature of active learning. Hora argues that we cannot rely on unidimensional descriptors of teaching or on simply determining whether active learning is present or absent, and that we need to have a more nuanced understanding of the multidimensional nature of an undergraduate STEM classroom (Hora and Ferrare, 2014; Hora, 2015). For example, Hora notes the term “lecture” indicates an extended period of time in which the instructor is talking or delivering content. However, while “lecture” may be perceived as a unidimensional and even undesirable teaching strategy, it does not preclude many effective teaching techniques, including substantial student interaction, from occurring concurrently. Our work is complementary to each of these studies, in that we aimed to describe STEM classrooms using observation data that would capture the multidimensional nature and the nuances of teaching. Specifically, the scope of our study is to better understand the nuances of one common teaching technique that is frequently used in a common type of class: clicker-based active learning in the high-enrollment, lecture-based STEM course.

    We used the OPAL protocol to observe a number of high-enrollment STEM courses whose instructors were using clicker-based active learning. OPAL was developed from COPUS and TDOP; hence, it is similar in structure to these protocols (Frey et al., 2016). Like COPUS, OPAL captures both instructor and student behaviors, which are marked in 2-minute intervals. Additionally, in the OPAL protocol, every question and answer posed during a 2-minute interval is tallied. Having the ability to count the number of questions and answers that occur during class can help to differentiate classrooms with a high amount of student interaction from classrooms with lower amounts. This is especially important in high-enrollment, lecture-based courses, in which students may not have many opportunities to interact with the instructor—we wanted to be sure that this student–instructor interaction was captured, as past literature has noted that the instructor asking questions during the lecture can be a sign that effective teaching strategies are being used (Schonwetter, 1993; Hora, 2015). Additionally, a method for developing a visual timeline of the observation data is available for OPAL (Frey et al., 2016; also see the Supplemental Material). Creation of the timeline also includes “segmenting” of the observation data. Segmenting the timeline provides a high-level overview of what occurred in the class session and how long each of the major teaching strategies that were used lasted. Segments are inserted into the timeline based on the codes marked. Examples of segments might be: lecture, interactive lecture, problem solving with group work, and demonstration (Frey et al., 2016). Importantly, other observation protocols, like COPUS or TDOP, could be used to generate similar timelines and would provide analogous data to OPAL. One additional reason we used OPAL in the current study was that many of the observations we conducted were also intended for non-research purposes (i.e., teaching consultations). The specific codes contained in OPAL were used for these other faculty development endeavors.

    The Current Study

    Given the rising popularity of using clickers in undergraduate STEM courses and past work using observation data to describe classrooms, we sought to describe some of the many ways that faculty implement clicker-based active learning into their classrooms by observing the courses. In addition, we gathered interview data from faculty to understand why they opted to use clickers in these ways, and we provide more detail on how clickers were implemented in the courses we observed. The study was guided by the following research questions:

    1. How do faculty members from one university implement clicker-based active learning into high-enrollment, lecture-based STEM courses? Why do they use these implementations?

    2. How do faculty members situate clicker-based active learning into the rest of the lecture? Why do they situate clicker-based active learning in their courses in this way?

    METHOD

    Study Setting

    Data were collected at a midsized, private, selective research institution in the midwestern United States. In the Fall of 2013, the institution was awarded a grant from the Association of American Universities (AAU) Undergraduate STEM Education Initiative that aimed to increase the use of active learning and other EBIPs in lower-level STEM courses. As a part of the AAU project at the institution, a large number of student clickers were purchased and housed in the on-campus library for students to check out free of charge. Faculty could request that their students be able to use the student clickers available at the library and would then check out a faculty clicker base from the teaching center. This system allowed us to have an inventory of faculty and courses using clickers at the university.

    In addition to the purchasing of clicker equipment as a part of the AAU initiative, the teaching center began adding multiple professional development opportunities on how to implement clicker-based active learning. New workshops were developed and implemented, and clicker-based active learning was one focus of a 3-day summer teaching institute for STEM faculty. Additionally, the teaching center began a faculty learning community for faculty using clickers or interested in using clickers. The goals of the workshops and the learning community were to disseminate information on EBIPs and to help faculty become reflective teachers, both important components in moving undergraduate science faculty toward the use of more EBIPs (Henderson et al., 2012). More specifically, the goals were to bring awareness to faculty on the benefits of clicker-based active learning and to teach them effective strategies for implementation (i.e., PI; Mazur, 1997).

    Because it was easy for both faculty and students to obtain the clicker equipment and because there were multiple professional development teaching opportunities, we saw tremendous growth in the number of faculty choosing to use clicker-based active learning over the course of the AAU initiative at our institution, especially in the lower-level STEM courses. For example, four faculty used clickers at the start of the program in Fall 2013, and 34 faculty, most of them STEM faculty, used clickers at the end of the program in Spring 2017. Course enrollments in the first year of the program totaled 2180 students and rose to 5949 students in the last year of the program (note that students could have been counted more than once in these numbers if they were enrolled in more than one course with clickers). Given that our institution has roughly 7000 undergraduate students in total, we viewed this as substantial growth in the span of only a few years. For this study, we observed class sessions of a majority of these faculty members between the Fall 2014 and Spring 2016 semesters (i.e., the second and third years of the 4-year program) to investigate the similarities and differences in their clicker implementation styles.

    Study Design

    To investigate our research questions, we used a sequential explanatory design for the study (Creswell et al., 2003; Tashakkori and Teddlie, 2010; Warfa, 2016). This type of mixed-methods design consists of two data-collection phases: quantitative data are collected first, then qualitative data are collected. The purpose of following the quantitative data collection with qualitative data is to study the topic in more depth than a typical quantitative study would allow and to describe the narrative behind the quantitative results. In this study, we first collected quantitative observation data in phase 1 and then conducted qualitative interviews with faculty participants in phase 2. Institutional review board approval was gained to conduct the study, and all faculty participants consented to participate.

    Phase 1: Observations, Visual Analysis, and Categorization of Faculty

    Collection of Quantitative Observation Data.

    Participants (N = 22) were full-time faculty from a variety of STEM disciplines at a selective research institution in the midwestern United States. There were 12 tenured, four tenure-track, and six non–tenure track faculty from eight (of 11 total) STEM departments on campus (i.e., biology; biomedical engineering; chemistry; computer science and engineering; energy, environmental, and chemical engineering; mathematics; physics; and psychological and brain sciences). Participants were targeted for the study because they had requested the use of clickers available for checkout at the campus library for their students, had checked out a faculty clicker base from the teaching center, and were teaching large lecture-format STEM courses. Thus, we had a list of faculty known to be using clickers, and we invited them to participate in the research study. To our knowledge, these participants represented the majority of the STEM faculty implementing clickers at the institution at the time of data collection.

    We describe the courses as lecture-based for two reasons. First, the university describes these courses as lectures in official documents, and second, our data support the idea that the predominant teaching strategy used in every observation was lecture. However, it should be noted that many of the courses included additional components such as laboratories, recitations, or studio time, but only the lecture portion of the course was observed. The courses had enrollments ranging from 51 to 348 students (most were 120+ students; only two courses were in the 50–60 student range). Nearly all observations were of introductory-level courses—only 7% of the observations were from upper-division courses.

    All faculty members were observed between three and eight times during a given semester. Sixty-four percent of the 22 faculty participants (N = 14) were observed during multiple semesters. When this occurred, we observed the same topics each semester to control for variation in content that may be more or less amenable to active learning (Shulman, 1986; Gess-Newsome, 1999). Data were collected for two academic years, ranging from the Fall of 2014 through the Spring of 2016.

    The OPAL observation data from each observation (N = 183) were transformed into an OPAL timeline (see Frey et al., 2016). We separated the quantitative timeline data into segments corresponding to the teaching strategies used, which allowed for a more macrolevel overview of what strategies occurred during the observation, when these strategies occurred, and for how long each strategy lasted. Nearly all of the data from each observation were able to be sorted into one of four key teaching strategies of interest to our study: clicker activities, lecture, other active-learning activities, and demonstrations/videos. Data that could not be sorted into one of these four teaching strategies were labeled “not coded” and were often course announcements at the beginning or end of a class session (the “not coded” data amounted to only 3.38% of the class session on average). Because multiple OPAL codes could be selected in any 2-minute interval, segments would sometimes overlap across one interval. When segments overlapped, we divided the interval in half, attributing 1 minute to one segment and 1 minute to the other segment. When the segmenting technique was applied, the codes in each 2-minute interval were our guides for determining where one segment began and another ended (see Frey et al., 2016, and the Supplemental Material for guidance on how to segment). The OPAL codes that corresponded to each of the four segment types can be found in Table 1, samples of segmenting for each category and subcategory are provided in Figure 1, and additional information on how we applied the segmenting can be found in the Supplemental Material.
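
    To make the overlap-splitting rule concrete, the sketch below illustrates the logic in Python. This is not the OPAL tooling itself: the code-to-segment mapping is abbreviated, and the function names and data layout are assumptions made for illustration only (the full segmenting procedure is described in Frey et al., 2016, and in the Supplemental Material).

```python
# Illustrative sketch of the segmenting logic described above (not the actual
# OPAL tooling). Assumes each observation is a list of 2-minute intervals,
# each carrying the set of OPAL codes marked during that interval.

# Abbreviated, hypothetical mapping from OPAL codes to segment types (cf. Table 1).
SEGMENT_CODES = {
    "lecture":      {"Lec", "Lpv", "LHV", "LI"},
    "clicker":      {"VT", "VH"},        # voting codes anchor clicker activities here
    "other_active": {"ADV"},
    "demo_video":   {"PDV"},
}

def label_interval(codes):
    """Return the segment labels whose codes appear in a 2-minute interval."""
    return [seg for seg, seg_codes in SEGMENT_CODES.items() if codes & seg_codes] or ["not_coded"]

def minutes_per_segment(observation):
    """Tally minutes per segment type; when two segment types overlap within
    the same 2-minute interval, split the interval evenly between them."""
    totals = {}
    for codes in observation:
        labels = label_interval(set(codes))
        share = 2.0 / len(labels)          # e.g., 1 minute each if two segments overlap
        for label in labels:
            totals[label] = totals.get(label, 0.0) + share
    return totals

# Example: 4 minutes of lecture, then an interval where lecture and a clicker vote overlap.
observation = [{"Lpv"}, {"Lpv"}, {"Lpv", "VT"}]
print(minutes_per_segment(observation))   # {'lecture': 5.0, 'clicker': 1.0}
```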

    TABLE 1. Description of typical OPAL codes in each segment^a

    Segment label | Typical OPAL codes in this segment | Brief code descriptions
    Lecture | Lec | Lecture
    Lecture | Lpv | Lecture with premade visuals
    Lecture | LHV | Lecture with handwritten visuals
    Lecture | LI | Interactive lecture
    Clicker activity | PSb | Pose problem-solving activity on board
    Clicker activity | PSv | Pose problem-solving activity verbally
    Clicker activity | QG | Discuss question in groups
    Clicker activity | Ind | Think/work individually
    Clicker activity | VT | Vote with technology
    Clicker activity | VH | Vote by show of hands
    Clicker activity | Sfu | Summary follow-up
    Clicker activity | Dfu | Discussion follow-up
    Other active learning | ADV | Active demonstration/video
    Other active learning | PSb | Pose problem-solving activity on board
    Other active learning | PSv | Pose problem-solving activity verbally
    Other active learning | QG | Discuss question in groups
    Other active learning | Ind | Think/work individually
    Other active learning | Sfu | Summary follow-up
    Other active learning | Dfu | Discussion follow-up
    Demonstration/video | PDV | Passive demonstration/video
    Demonstration/video | Sfu | Summary follow-up
    Demonstration/video | Dfu | Discussion follow-up

    ^a These are only examples of typical codes that were likely to be included in each segment type and are not the exhaustive list of all codes that may be contained in a segment. A full code list and detailed code descriptions can be found in the Supplemental Material in Frey et al. (2016).

    FIGURE 1. Sample segmenting from each category.

    Visual Analysis of Observation Data.

    Once all the timelines were generated and segmented, we conducted a visual analysis of the timelines for every faculty member, comparing observation timelines within and between faculty. Qualitative analysis methods, including visual analysis, are typically iterative in nature, meaning that the analyses are conducted in multiple steps before arriving at the final results (Merriam, 2009). Results are typically in the form of categories, sometimes subdivided into narrower subcategories, that are then described in detail with samples of data provided (Merriam, 2009).

    The visual analysis in this study began with one research team member (E.D.S.) visually inspecting every timeline and looking for similarities and differences in the implementation of clicker-based active learning. Clear patterns began to emerge based on the frequency, duration, timing, and consistency of the clicker activities used. Additionally, there were some noticeable differences in whether faculty had students work individually during a clicker activity or with their peers, and whether the faculty participant provided a summary follow-up or engaged the students in a discussion follow-up to the clicker activity. Other contributing factors to identifying patterns included the duration of the lecture portion(s) of the observation and frequency with which other active-learning strategies or demonstrations/videos were used. Finally, clear patterns emerged in a visual inspection of the number of questions and answers that occurred during the observation, with a few of the faculty members having consistent and frequent interactions with students through questioning.

    Categorization of Faculty Based on Observation Data.

    On the basis of the visual analysis, the initial research team member (E.D.S.) categorized 20 of the 22 faculty into an initial seven categories. Next, two additional research-team members (M.D.R. and R.F.F.), together with the initial team member (E.D.S.), visually inspected all timelines. The three team members viewed the timelines together, while discussing the patterns E.D.S. had identified. M.D.R. and R.F.F. agreed with the initial categories drafted by E.D.S., with some minor modifications. These modifications included combining three of the initial categories into one category with three subcategories (which became category 1 and its subcategories 1A, 1B, and 1C), and combining two other initial categories into one category with two subcategories (which became category 4 and its subcategories 4A and 4B). This created a final category scheme consisting of four main categories (categories 1, 2, 3, and 4), in which two categories consisted of subcategories (categories 1 and 4). Finally, the three team members discussed how to categorize the remaining two faculty members, deciding by consensus that one belonged to an existing subcategory (4A), while the other should be assigned a unique subcategory (1D). The final categories and subcategories are listed in Table 2, and sample segmented timelines can be found in Figure 1.

    TABLE 2. Clicker implementation category descriptions^a (numbers of observations and faculty are given in parentheses)

    Category 1: Lecture with clicker activities
    • Clicker activities are the main (or only) active-learning component.
    • Subcategories gradually increase in the number, consistency, and amount of clicker activities and interaction with students.

    1A: Limited clicker, no other interaction (20 observations; 3 faculty^b)
    • Typically one clicker activity (or none) implemented per class session.
    • Virtually no interaction with students occurs outside clicker activities.

    1B: Regular clicker, limited interaction (22 observations; 4 faculty^b)
    • Clicker activities are slightly more frequent and consistently used than in 1A (one to two clicker activities per class session).
    • Some interaction with students occurs throughout class outside clicker activities.

    1C: Regular clicker, medium interaction (26 observations; 2 faculty)
    • Clicker activities are slightly more frequent and consistent than 1B (typically two clicker activities per class session).
    • Some demonstrations are used, but inconsistently (on select dates/topics).
    • Inconsistent usage of interaction; some observations contain substantial interaction with students, while other observations have very little.

    1D: Regular clicker, high interaction (5 observations; 1 faculty)
    • Clicker activities are just as frequent and consistent as in 1B and 1C (typically two clicker activities per class session).
    • Some demonstrations are used, but inconsistently (on select dates/topics).
    • All observations contain a consistently high level of interaction throughout.

    Category 2: Lecture with limited clicker, but regular demonstrations (9 observations; 3 faculty)
    • Clicker activities are not used frequently or even during every observation.
    • All observations contain a substantial demonstration/video component.

    Category 3: Lecture opens with clicker activities (4 observations; 1 faculty)
    • All clicker activities are lengthy (roughly one-third of class time) and always occur at the very beginning of class.

    Category 4: Lecture with clickers and mix of other active learning
    • Faculty intersperse a variety of activities into the course; clicker activities are only one type of activity used regularly.
    • Substantial amount of demonstrations are used, and some faculty incorporate other active-learning activities.
    • Most faculty have some interaction with students, with some having a very substantial amount.

    4A: Regular clickers and active learning, medium interaction (65 observations; 7 faculty)
    • Other active learning and some interaction occur, but inconsistently.

    4B: Regular clickers and active learning, high interaction (32 observations; 3 faculty)
    • All observations contain a consistently high level of interaction throughout the class session.

    ^a “Clicker activities” included all clicker questions that occurred in sequence, rather than individual clicker questions. For example, if a faculty participant posed three clicker questions in sequence, with no other teaching techniques in between the clicker questions, all three clicker questions would be lumped into one “clicker activity” likely lasting for several 2-minute intervals.

    ^b Indicates that one faculty participant in this category was categorized into a different category in other semesters. This is why the total number of faculty participants adds up to more than the 22 in the study sample.

    To verify that the visual analysis yielded meaningful categories, we also conducted statistical analyses. Specifically, the Supplemental Material contains analyses confirming that these categories and subcategories are significantly different from one another in ways that directly aligned with the visual analysis. We found that all categories significantly differed in the amount of time spent on clicker activities, with category 3 spending the most time and category 1 the least time on clicker activities (all p < 0.05). Additionally, category 2 faculty spent more time than faculty in any other category on demonstrations/videos (all p < 0.01). Finally, instructors in subcategories 1D and 4B asked significantly more questions than faculty in other categories (all p < 0.001), confirming that these faculty had significantly more interaction with students than other faculty, as suggested by the visual analysis.
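
    The specific tests are reported in the Supplemental Material. For readers who want to run a similar check on their own observation data, the sketch below shows one way to make pairwise category comparisons in Python; the DataFrame contents, column names, and the choice of a Mann-Whitney U test are illustrative assumptions, not the analyses we report.

```python
# A minimal sketch of the kind of category comparison described above; the
# actual tests are reported in the Supplemental Material. The toy data, column
# names, and the pairwise Mann-Whitney U test are assumptions for illustration.
import itertools
import pandas as pd
from scipy import stats

# df: one row per observation, with its category and minutes spent on clicker activities.
df = pd.DataFrame({
    "category":        ["1", "1", "2", "2", "3", "3", "4", "4"],
    "clicker_minutes": [4.0, 6.0, 2.0, 0.0, 18.0, 16.0, 10.0, 12.0],
})

# Compare every pair of categories on time spent on clicker activities.
for a, b in itertools.combinations(sorted(df["category"].unique()), 2):
    x = df.loc[df["category"] == a, "clicker_minutes"]
    y = df.loc[df["category"] == b, "clicker_minutes"]
    stat, p = stats.mannwhitneyu(x, y, alternative="two-sided")
    print(f"category {a} vs {b}: U = {stat:.1f}, p = {p:.3f}")
```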

    Phase 2: Qualitative Interviews

    After conducting the visual analysis and categorizing the faculty (and confirming the distinctness of the categories statistically), we sought to gain more insight into the “why” portions of our research questions by conducting interviews with some of the faculty. Interviewing, like other qualitative research methods, does not necessarily aim to gather as large a sample as possible of any population of interest. Rather, qualitative methods such as interviewing aim to conduct purposeful sampling, meaning that we should “select a sample from which the most can be learned” (Merriam, 2009, p. 77; Suri, 2011). The purpose of interviewing faculty in this study was to better understand why and how an instructor might implement clicker-based active learning in a course; it was not to understand every reason why an instructor might use clickers or every way an instructor might use them. Thus, we selected instructors in each of the categories or subcategories from whom we anticipated we could learn the most.

    Our primary reason for selecting faculty for interviewing was to better understand their reasoning for why they implemented clicker-based active learning and how they decided how to situate it within their classes. We sought interesting stories that would provide a rich understanding of that faculty participant and his or her teaching style. To gather a diverse set of stories, we selected at least one faculty participant from each category or subcategory, resulting in a minimum of eight interviewees. We also aimed to get a wide variety of stories by selecting interviewees who were diverse in terms of gender, discipline, and tenure status. Multiple faculty members were selected from a category or subcategory if we thought they would provide substantively different stories regarding their clicker use. Thus, we interviewed a wide range of faculty, but not all the observed faculty (which is common in qualitative studies; e.g., Dye and Stanton, 2017).

    Of the 22 faculty who were observed, three had left the university or retired between the time of our observations and the interviews, so we were unable to request interviews from them. Of the remaining 19 faculty, 11 (50% of the observed faculty) were invited to participate in the interviews, and 10 agreed (one faculty member did not respond to our interview requests). The 10 interview participants were from five of the eight departments in the observation sample (biology; chemistry; mathematics; physics; and psychological and brain sciences). Five faculty participants interviewed were men and five were women. Four were tenured, three were tenure-track, and three were full-time non–tenure track. All but one taught courses with 120+ students (one instructor’s course had 51 students). We developed interview questions based on our research questions and information learned from the visual analysis and categorization. We developed a set of 10 interview questions that were standard across all the interviews (see the Supplemental Material for the full list of interview questions). We pilot tested the interview questions with three participants and did not make any changes to the questions after the pilot testing. Interviews were semistructured in the sense that we had a planned set of questions for each faculty participant; however, depending on the responses, additional questions were asked when further elaboration was needed or when something novel or especially noteworthy was expressed that was not covered by the planned interview questions (Merriam, 2009). Interviews lasted roughly 30 minutes per participant and were audio recorded. Any quotes from interviews that are presented here may have been lightly edited, and we have indicated any instances where editing occurred. For example, we used ellipses to indicate when words were removed from the quote for brevity or to improve clarity and brackets to insert words being referred to but not actually uttered by the interviewee at that moment.

    RESULTS

    First, we will describe the results of the visual analysis and categorization of faculty, and then we will describe the results for our two research questions.

    Visual Analysis Results

    Through the visual analysis of the timeline data, we identified four patterns we used to categorize faculty. We labeled these four categories: category 1, lecture with clicker activities; category 2, lecture with limited clicker activities, but regular demonstrations; category 3, lecture opens with clicker activities; and category 4, lecture with clicker and a mix of other active learning. We further divided the faculty in categories 1 and 4 into subcategories to capture the more subtle, but potentially important, differences in the implementation styles of those faculty. A sample timeline from each category and subcategory can be found in the Supplemental Material, while sample segmenting from a timeline in each category and subcategory can be found in Figure 1.

    The ordering of the categories roughly aligned with the amount and variety of active learning used by faculty in each category (e.g., Figure 1). Importantly, the categories were exhaustive: every faculty participant could be placed into one of the four categories. The categories were also nearly mutually exclusive, with all but two faculty members being categorized into the same category every semester we observed them. Of the two who switched categories in later semesters, one was categorized into subcategory 1A in the first semester he/she was observed and into subcategory 1B in the second; the other was categorized into subcategory 1B in the first semester he/she was observed and into subcategory 4A in the latter two semesters. See Table 2 for descriptions of each category/subcategory and the number of faculty and observations in each.

    Research Question 1: In What Ways Do Faculty Implement Clicker-Based Active Learning and Why?

    Generally, we found that the faculty tended to use similar procedures when implementing clicker activities, with variation occurring in whether they asked students to work individually or in groups, and whether the faculty participant provided a summary follow-up to the clicker activity or engaged the students in a whole-class discussion to wrap up the clicker activity. To explore these variations, we examined the percent of clicker activities in each category and subcategory that used individual versus group work and summary versus discussion follow-up.
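
    As a brief illustration of how such percentages can be computed once each clicker activity has been coded, the following Python sketch uses a toy data set; the field names and values are hypothetical and do not come from our coding sheets.

```python
# Sketch of the percentage breakdowns reported below, assuming each clicker
# activity has been coded for its category, for whether students used peer
# discussion or only individual thinking, and for whether the follow-up was a
# summary or a whole-class discussion. Data and field names are illustrative.
import pandas as pd

activities = pd.DataFrame({
    "category":  ["1A", "1A", "2", "3", "4A", "4B"],
    "thinking":  ["peer", "peer", "individual", "individual", "peer", "peer"],
    "follow_up": ["summary", "summary", "summary", "discussion", "discussion", "discussion"],
})

# Percent of clicker activities in each category using each thinking style.
pct_thinking = (activities.groupby("category")["thinking"]
                .value_counts(normalize=True)
                .mul(100).round(1))
print(pct_thinking)
```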

    Students Working Individually versus Students Engaging in Peer Discussion.

    In terms of implementation styles, we found that 91% of clicker questions were implemented using peer discussion and 9% used individual thinking and responding. However, this varied depending on the category. Faculty in categories 2 and 3 used more individual thinking, while faculty in categories 1 and 4 (which also comprised the majority of the observations) used more peer discussion (see Figure 2). To be clear, the instances of individual thinking and responding refer to instances in which only individual thinking and responding was used and was not followed by a peer discussion and revote, which is the procedure in the typical PI method.

    FIGURE 2. Percent of clicker activities in each category that utilized individual thinking vs. peer discussion.

    Faculty Providing Summary Follow-Up versus Engaging Students in Discussion Follow-Up.

    Faculty also varied in the amount and consistency with which they tended to use a summary follow-up after the voting as compared with a discussion follow-up (see Figure 3). Overall, 32% of clicker questions were implemented with summary follow-up and 68% with discussion follow-up. But again, results varied depending on the category. Specifically, faculty in category 1 varied in whether they used summary or discussion follow-up, with levels of discussion follow-up increasing from subcategories 1A to 1D. Specifically, faculty in 1A and 1B tended to use more summary follow-up, while faculty in 1C and 1D tended to use more discussion follow-up. Faculty in category 2 tended to use summary follow-up the majority of the time. In categories 3 and 4, the majority of the follow-up was discussion, with the faculty participant in category 3 in particular using only discussion follow-up for clicker activities. These data suggest that there was considerable variability in how frequently summary and discussion follow-up were used, even within categories and subcategories.

    FIGURE 3. Percent of clicker activities in each category that utilized summary follow-up vs. a whole-class discussion follow-up.

    After examining the similarities and differences in implementation styles, we conducted interviews with select faculty to better understand why they used these implementation styles. In particular, we asked in more detail 1) why they used peer discussion, and 2) why they used discussion follow-up.

    Why Did Faculty Use These Implementation Styles?

    Regarding why faculty used these procedures, all faculty members we interviewed provided detailed accounts of the procedures they used when implementing a clicker activity, and the descriptions were relatively similar, except for the differences we noted based on the quantitative data.

    Generally, when a clicker question was presented, faculty read the question first, then provided time for students to either think about the question on their own (i.e., the “Ind” code) or discuss the question with their peers sitting near them (i.e., the “QG” code). Once students had been thinking or discussing the question for a short time, the faculty member would open the polling and students would register their individual responses using their clickers. After closing the poll, faculty generally showed the histogram of responses. Next, some faculty described providing a detailed explanation of the correct answer (e.g., summary follow-up), while others described engaging the whole class by asking individual students to explain why they responded with a certain answer (e.g., discussion follow-up). After the whole-class discussion, those faculty would generally also provide a summary explanation of the correct answer. The following is an example of a psychology faculty participant from category 2 describing his/her procedure:

    I first introduce it and say, “Ok, let’s think back to what we talked about yesterday” or something like that. Or “let’s do a [clicker] question here.” Then I show the question with all alternatives already up there. And then, I’ll usually say to them, “Think about this … talk to your neighbors about what you think.” And I don’t usually let it go very long … maybe a minute.… Then they’ll vote.… The next slide then is the correct answer bolded. Then I’ll [ask] … “Why is that the correct answer?”… “Why are the others not correct?” And then have [them] tell me why that’s the case.

    Faculty generally indicated that they chose to implement the clicker activities using these procedures because they had learned them from colleagues, from professional development workshops, or from a consultation with a faculty developer at the teaching center. For example, a physics faculty participant from subcategory 4B indicated that he/she “learned the basic framework from others who teach the class.” A psychology faculty participant from subcategory 4A named the many professional development activities he/she had attended at the institution as the reason for using this implementation style.

    I feel like I learned [from] going to the workshops … and the [learning] community meetings where everybody using clickers came and talked about it. And then, I think also the two- or three-day summer teaching workshop … all of those were helpful.

    Finally, more than one faculty participant indicated that using this procedure simply “seemed logical” (from a chemistry participant in subcategory 1A) or “seemed like a good way to do it” (from a psychology participant in category 2). These responses suggest that some faculty used their own reasoning to determine how to implement clicker-based active learning.

    Why Peer Discussion?

    We asked faculty participants why they chose to have students engage in peer discussion as a part of their clicker activity procedures. Faculty gave a variety of responses, including 1) clicker activities help with attention, 2) peer discussion seemed like an important component of active learning, and 3) peer discussion is perhaps where the learning is occurring in the activity. In particular, a number of faculty indicated that the attention and engagement that peer discussion prompted were important. For example, a physics faculty participant from subcategory 4A said that

    It breaks up the class a bit, so it wakes some people up when they get to talk to their neighbors a bit … breaks up the monotony of the lecture.

    Another physics faculty participant from subcategory 4B stated, “Having them give their thoughts always seemed very important,” while a psychology participant from category 2 indicated that “that’s where the testing of their knowledge and the learning part comes in.” Additionally, one physics participant from subcategory 4B indicated a novel reason for using peer discussion:

    Having them [discuss] makes them engage and resets their attention. Helps them practice using scientific concepts and scientific language, which is something that is not obvious or intuitive, especially if you don’t do it much. It makes them more comfortable with what these terms are and how we use them.

    Given these responses, it seems that faculty use peer discussion as a means of engaging (or re-engaging) the students during lecture and that, while they cannot always articulate why peer discussion is beneficial, they recognize its importance. Also, at least one faculty member identified the skill building that can be accomplished with peer discussion via practicing using scientific language.

    Why Discussion Follow-Up?

    Finally, we asked faculty participants why they felt it was important to engage the whole class in a discussion follow-up after students had responded to the question with their clickers. A chemistry faculty participant from subcategory 1D indicated that it was important to help students understand not only why the correct answer was correct, but also why the other response options were incorrect:

    [After closing the poll and showing the histogram] … Then we talk about why one is the answer and [I ask them] “Why did you say that as the answer?” We talk about that. Then I typically have one [response option] that I think people will get confused about and I’ll say, “Well how come you didn’t pick [that answer?]” and then we’ll talk about why they didn’t pick [that answer]. Or if someone picked [that answer] why did they choose it? I think it’s really important to discuss not only why an answer is correct but why some of the other answers are not correct.

    Faculty who engaged students in a discussion follow-up also described ending the discussion with a clear explanation of what the correct answer was and why it was correct. A physics faculty participant from subcategory 4B indicated,

    [In the past] sometimes students said … they didn’t know what the final answer was … So [now] I make sure everything is clear by writing key points or key equations on the board, making sure everyone sees that and making sure that I make a clear summary at the end since sometimes it can be hard to follow what the student says [in the whole-class discussion].

    Given these responses, it seems that faculty use discussion follow-up for a few reasons, including helping students use their reasoning skills to identify why answers are correct (or incorrect) and to ensure that students know which answer is correct and why.

    Summary of Research Question 1 Findings.

    Regarding research question 1, we found that faculty participants primarily used either peer discussion or individual thinking during the clicker activities. However, faculty did not always use the same follow-up procedure—they varied in whether they followed the clicker activity with a summary or a discussion-style follow-up. Faculty expressed that using peer discussion was important because it re-engaged students and reset attention, and because they believed it was the step in the overall procedure in which the learning was occurring. They expressed that discussion follow-up was a way to help students reason and understand which answer was correct and why it was correct.

    Research Question 2: How Do Faculty Situate Clicker-Based Active Learning into Class and Why?

    Generally, we found there was substantial variation in how faculty situated clicker activities in their class sessions. To explore these variations, we examined the average amount of time spent on each teaching activity, when clicker activities tended to occur within a class session, and how much interaction occurred during the class sessions.

    Average Amount of Time Spent on Each Teaching Strategy.

    We first conducted descriptive analyses on the percent of time spent on the four activities we segmented in the timelines: clicker activities, lecture, demonstrations, and other active learning (see Figure 4). Any activity outside of these four segment types was labeled as “not coded” (which, as previously mentioned, was only 3.38% of class time on average). As noted earlier, all the observations were of lecture-based courses. Therefore, it was not surprising to find that lecture was the teaching strategy faculty spent the most time using. Across all categories, faculty spent an average of 69% of class time on lecture. With the exception of category 2, the teaching strategy faculty used most often after lecture was clicker activities at 19% of the time overall. Six percent of class time overall was spent on demonstrations and videos, and 3% was spent on other active-learning activities.

    FIGURE 4. Average percent of time spent on each teaching strategy.

    When Do Clicker Activities Occur within Class Sessions?

    In addition to how much time was spent on clicker activities, we examined whether clicker activities were interspersed during the class session. We recorded the time point at which every clicker activity began in each observation—the first 10 minutes of class, the middle of class, or the last 10 minutes of class. Using longer intervals (i.e., first 15 minutes, middle, and last 15 minutes of class) did not produce meaningfully different results. We categorized an observation as having interspersed clicker activities if the observation had any clicker activities that began in the middle of class (i.e., neither in the first 10 minutes nor the last 10 minutes). Generally, we found that clicker activities were interspersed in the majority of the observations (>60%) in subcategories 1B, 4A, and 4B. Clicker activities were interspersed during roughly 40–50% of the observations in subcategories 1A, 1C, 1D, and 2. In category 3, however, clicker activities were never interspersed. This was because the faculty member began every clicker activity at the very beginning of class, and the clicker activity lasted for about one-third of the class session. This was the defining feature of category 3, and these data clearly show how this category differs from the others (see Figures 1 and 5).

    FIGURE 5. Percentage of observations in which clicker activities were interspersed throughout the class session.
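
    The “interspersed” rule described above can be expressed in a few lines of code. The sketch below is only illustrative; the start times, session length, and function name are assumptions for the example.

```python
# Illustrative sketch of the "interspersed" rule: an observation counts as
# interspersed if any clicker activity begins after the first 10 minutes of
# class and before the last 10 minutes. Values below are hypothetical.
def is_interspersed(clicker_start_minutes, session_length=50, window=10):
    """Return True if any clicker activity starts in the middle of the session."""
    return any(window <= start < session_length - window
               for start in clicker_start_minutes)

# Example: activities starting at minutes 2 and 24 of a 50-minute session.
print(is_interspersed([2, 24]))   # True (the activity at minute 24 falls in the middle)
print(is_interspersed([2, 45]))   # False (both fall in the opening/closing windows)
```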

    Level of Interaction.

    We also examined the level of interaction between students and the faculty participant via how many questions and answers occurred in the observations. Across all categories, faculty asked an average of 15.56 questions in a 50-minute class session. However, during our visual analysis of the observations, we noticed that a select few faculty members used very high levels of interaction with the students. They asked the students questions consistently throughout a class session, regardless of the teaching strategy being used (e.g., lecture, clicker activities, demonstrations, or other active learning; see Figure 6 and the Supplemental Material). Specifically, faculty in subcategories 1D and 4B asked an average of 40.25 questions per class session, compared with 9.31 questions for faculty in all other categories/subcategories. For the faculty in subcategories 1D and 4B, clicker activities were not the only point of contact with students, and there were many ways in which these faculty interacted with students. In contrast, for other faculty, like those in subcategory 1A, clickers were the sole method of interacting with students during class time, given that on average less than one question per class session was posed by the instructor.

    FIGURE 6. Average number of faculty and student questions and answers per observation.

    Why Did Faculty Choose to Situate Clicker Activities in These Ways?

    Generally, faculty described a variety of reasons in their interviews for why they chose to situate clicker-based active learning in specific ways. The primary reasons were 1) they had different types of questions that they would pose for different purposes, 2) consistency and timing were important, 3) they wanted to break up the class session, and 4) they chose one of the four teaching strategies (i.e., lecture, clicker activity, demonstration/video, or other active learning) depending on the topic being discussed that day. Finally, instructors with high levels of interaction described how their use of clicker-based active learning related to their use of questioning.

    Different Questions with Different Purposes.

    Eight of the interviewed faculty indicated they had different types of clicker questions that each served different purposes. Some examples of question types included quantitative, calculation, conceptual, application, review, recall, prediction, introduction to a topic, practice solving problems, feedback for the faculty member on student understanding, and addressing misconceptions. For example, a biology faculty participant in subcategory 1B indicated he/she had multiple types of questions:

    Some of the questions are simply review, either from something we talked about last lecture or something they should have read in the reading. [The purpose is to] just to kind of warm them up, get them thinking. Sometimes [the question] is for me to assess did they get the concept? Was there some confusion?… The third type is [for practice. I’ll say] “I want you to practice it yourself … Now, draw a peptide bond [in your notes]. I’ve got four different examples up here, which is right?… Does your drawing match any of these?”

    Consistency.

    Five of the faculty interviewed indicated that consistency (i.e., using clicker activities every class session, and at the same time points in the class session) was important. For example, a chemistry faculty participant in subcategory 1C described that quantity, timing, and types of questions were all important aspects of how clicker activities were situated in his/her class sessions. This participant always aimed for “review” type question(s) toward the start of class that were straightforward and mostly recall type questions. Then, he/she interspersed a clicker activity during the middle of the lecture period to either 1) introduce an idea and possibly a common misconception about that idea; 2) have students work through an example that normally the faculty member would have worked through during class; or, most commonly, 3) apply content learned from lecture that day:

    After we had presented a topic, we would give them some type of application question. They [either] had to draw a conclusion, … think about the material in some kind of critical way, link to key concepts, [or use] some type of higher level thinking …

    Interestingly, this faculty participant stated he/she had learned from training on another type of active-learning method, process-oriented guided inquiry learning (POGIL; Moog, 2014), that consistency of implementation is important:

    One of the key pillars [of POGIL] is … to be consistent. Whatever you decide to do … if you are going to implement POGIL every day, once a week, before a test, whatever it is, communicate that clearly, and really stick to it. It’s really important from a student-learning standpoint. In terms of when you implement something new, there’s got to be that “buy-in” period.

    This participant clearly transferred what they knew about implementing one familiar type of active learning to their use of clicker-based active learning.

    To Break Up the Class Session.

    Four of the faculty interviewed indicated that they situated the clicker activities in their lectures in a certain way to simply break up the class session. For example, a chemistry faculty participant from subcategory 1A indicated that “putting [clicker activities] in the middle of class gives students a break.” Similarly, a physics faculty participant from subcategory 4A said:

    I’m not trying to jam pack [the class session] with active learning … 50 minutes is a limited amount of time to get things done. I think the main thing is to make a mix. By the time you have one or two clicker questions and a demonstration, you’ve already broken things up enough.

    Choosing the Best Teaching Strategy.

    Similar to the previous finding in which faculty used different types of clicker questions for specific purposes, faculty who used a variety of teaching strategies indicated that they would select the best teaching strategy for each topic or that certain strategies were better for different purposes. For example, a physics faculty participant from subcategory 4B indicated how he/she decided whether to use a clicker activity or a demonstration to teach a topic:

    Sometimes [the topic is] something the students can predict [using a clicker activity]. Sometimes it’s easier to show them [using a demonstration] and then have them talk to their neighbor: “why did it do this?”

    In contrast, some faculty participants indicated that clicker activities were better suited for certain purposes than other teaching strategies. The math faculty participant from category 3 indicated:

    I feel like I use each tool for what it’s better at [i.e., lecture and clicker activities]. For example, for reviewing material, in a nonclicker class I might do that by lecturing on the blackboard or doing open-ended questions, but sometimes I felt it wasn’t effective, so I used the clickers for that instead. Whereas I wouldn’t think clickers would be a good way to communicate new information.

    Moreover, a psychology faculty participant from category 2 discussed reasons for using demonstrations/videos versus clicker activities:

    The videos are usually on something really specific that you are demonstrating, whereas the [clicker] questions are sometimes maybe a little bit broader. For example, I like to ask clicker questions that cover a lot of material. I think they serve different purposes.

    Additionally, some faculty described combining teaching strategies. For example, a physics faculty participant from subcategory 4A indicated that he/she sometimes paired demonstrations with clicker activities:

    Sometimes I use demos with clickers … instead of telling them everything about it, I would say something about what’s going to happen [in the demonstration] and then I ask [them to make a prediction], which I use as a clicker question. And then, of course, we can say, “Let’s see.”

    In summary, there did not seem to be one consistent reason faculty chose to situate clicker activities into their courses in the ways that they did. They situated the activities in their class sessions very differently and for very different reasons.

    Using High Levels of Interaction.

    Finally, faculty in subcategories 1D and 4B tended to use very high levels of interaction throughout class. Faculty from both subcategories described how their clicker use related to their high level of interaction: the chemistry participant in subcategory 1D indicated that the high use of questioning predated clicker use, while a physics participant from subcategory 4B described how using clicker activities with a discussion follow-up component made him/her more comfortable asking questions during class. The subcategory 1D participant indicated that comfort was an important factor in the high level of interaction: “I really feel more comfortable talking to 200–300 people if they’re talking back to me.”

    In contrast, the faculty participant from subcategory 4B described that using clickers for several years had made him/her more comfortable with the ambiguity of not knowing how students will respond or what kinds of explanations they will have. This comfort led to a gradual increase in the use of questioning:

    It’s definitely something I’ve grown into … [and using clickers] was sort of a jumping-off point. That used to be the one place that [students] would talk about things … Having done more clicker questions, you get more comfortable getting students to discuss things, then [I was able to] to use those skills that I had practiced in this one isolated incident in other places … I also think it makes them a lot more willing to ask questions. So I’ve noticed that as I’ve done this more, [students] will stop me more and say, “I don’t understand this” or “Why did we do this?” or “Would you do that step again?” a lot more than they used to … It seems to make them a lot more willing to interact with me …

    Summary of Research Question 2 Findings.

    Regarding the findings for research question 2, we found that the most time was spent on lecture, followed by clicker activities. We also found that many faculty interspersed clicker questions throughout class depending on the category or subcategory. However, category 3 never interspersed questions, because the faculty member always implemented a lengthy clicker activity at the start of class in order to review material, which was the defining feature of this category. Faculty indicated a variety of reasons for their choices on when to use clicker-based active learning, including 1) using different types of questions (placed at specific time points in the class session), 2) interspersing clicker activities to add consistency and structure to the course, 3) breaking up the class session, and 4) choosing the best teaching strategy for the concept regardless of when the teaching strategy would be used during the class session. Findings for research question 2 also suggest that a few instructors tend to have high levels of interaction with students independent of what teaching strategy is being used at the time. This finding suggests that, in addition to the more formal teaching strategies we segmented out of the data (i.e., clicker activities, lecture, demonstrations, and other active learning), less formal strategies like questioning and answering are also taking place. In contrast to instructors who plan to ask certain questions during their lecture, these instructors seemed to be asking questions more “off the cuff” as a way to walk students through problem solving or other content instead of simply transmitting content via lecture. In summary, faculty situate clicker activities into their class sessions in various ways for a range of reasons based on their goals for using clicker-based active learning.

    DISCUSSION

    Using a sequential explanatory mixed-methods design, we observed 183 high-enrollment, lecture-based STEM class sessions and conducted interviews with 10 of the 22 faculty who were observed in order to better understand how and why STEM faculty implement clicker-based active learning in their classrooms. We found that there were a number of similarities across faculty participants, but also some important differences. We have grouped our discussion of the results into two main findings that relate to our two research questions, and two additional findings we believe are an important part of the research, even though they are beyond the scope of the initial questions.

    Finding 1 (Research Question 1): Faculty Use Similar Procedures When Posing a Clicker Question and These Were Similar to Many of the PI Steps

    In regard to our first research question, we found that faculty participants largely used similar procedures when implementing clicker activities in their class sessions. They differed mainly on whether they asked students to work individually or in groups to answer the posed question, and on whether the follow-up was a summary explanation alone or a whole-class discussion, possibly followed by a summary. This finding is somewhat in contrast with past literature, which found substantive variation in instructor implementation of clicker-based active learning even when the same general technique was being implemented (e.g., PI; Mazur, 1997; Turpen and Finkelstein, 2009; Turpen et al., 2010).

    The approach used by the faculty in the study shared many of the procedures of PI, including steps 1 (the instructor poses a question), 5 (students provide their responses, often with a clicker), 6 (the instructor tallies the responses), and 7 (the instructor explains the correct answer; Mazur, 1997; Stains and Vickrey, 2017). The main difference between the approach taken by the faculty in this study and the PI approach is that the faculty in this study did not have students vote twice on the same question. The faculty in this study either tended to have students think about the question individually and then vote (i.e., categories 2 and 3) or had students discuss the question with peers and then vote (without first voting individually; i.e., categories 1 and 4). Therefore, only one vote was taken in nearly every instance, and the histogram of responses was shown immediately after the vote but was not used to make subsequent voting decisions (although in at least one case, a faculty member reported using the histogram to determine how much time to spend on the discussion follow-up).

    This lack of an initial individual vote is particularly important for the faculty using only peer discussion, because it changes the meaning of later PI steps. Specifically, when students discussed the question with their peers, they were not trying to convince their peers of the correctness of their answers and would not necessarily have had the opportunity to self-reflect on their initial responses. Even though this changes the nature of the peer discussion, this finding is similar to past literature suggesting that faculty often skip the individual thinking and voting steps, even though these are important components of PI (Turpen and Finkelstein, 2009; Vickrey et al., 2015; Dancy et al., 2016; Stains and Vickrey, 2017). Additionally, the fact that some faculty in the study used only individual thinking and some used only peer discussion seems similar to the “modes” of clicker use described by Lewin and colleagues (2016): clicker episodes that used only individual thinking and voting, episodes that used peer discussion, and episodes of other active learning.

    Another deviation from the PI methodology was that, instead of implementing the final PI step of providing a summary explanation to the students, faculty sometimes chose to have a whole-class discussion followed by a summary explanation as a way of wrapping up the clicker activity. This variation is similar to that found by Turpen and Finkelstein (2007), who suggested that novice PI implementers tended to either never or always use discussion follow-up, and it was only the more experienced implementers who varied whether they used summary or discussion follow-up. In the current study, faculty used discussion follow-up to ensure students had used reasoning to arrive at their answers. For example, many of the faculty interviewed described starting a whole-class discussion by asking students to explain their reasoning: why the correct answer was correct, or why an alternative answer was not correct. Past literature indicates the importance of having students practice their reasoning and sense-making skills in a clicker question context. For example, Turpen and Finkelstein (2010) found that courses using more faculty–student and student–student collaboration, both during the peer discussion and whole-class discussion portions of a clicker activity, facilitated more sense-making efforts from students. Additionally, Knight and colleagues (2013) found that asking students to focus on their reasoning (albeit during the peer discussion step) resulted in students using higher-quality reasoning to explain their answers. Given that faculty in the current study did not describe giving students much direction during the peer discussion portion, perhaps the faculty participants sought to ensure that students practiced their reasoning skills by having a few students explain their reasoning to the whole class. Another possibility is that a few of the faculty learned this technique from being trained on another type of active learning, POGIL (Moog, 2014). Given that one of the faculty participants we interviewed stated that he/she transferred one skill learned from POGIL to how he/she implemented clicker-based active learning, perhaps this participant (and others) transferred this part of the approach as well.

    There are two likely reasons why faculty used roughly the same procedures when implementing clicker-based active learning. First, some faculty in the study indicated that they learned how to implement clicker activities from their colleagues. Past research supports that many faculty learn how to implement PI informally from their colleagues (Dancy et al., 2016); thus, if a colleague was implementing clickers using a certain style, the faculty member may have adopted the same procedures. Second, as mentioned previously, the institution was a part of the AAU Undergraduate STEM Education Initiative, which allowed the teaching center to provide additional professional development opportunities and a learning community on clicker-based active learning for interested STEM faculty. The goals of these professional development events were to bring awareness about the benefits of clicker-based active learning and other EBIPs and to educate faculty on effective implementation strategies such as PI (Mazur, 1997). Many of the faculty in the study attended these professional development events and learning community meetings, which may be why implementation styles were similar: they were trained to implement clicker-based active learning in roughly the same way. In particular, this may explain why most faculty did not ask students to think and vote individually before engaging in peer discussion. In the professional development events and learning community meetings, when faculty voiced concerns about time constraints, skipping the individual thinking and voting steps was often offered as a potential adaptation. The faculty developers leading the sessions made this concession because peer discussion has slightly more support in the literature for producing learning gains than the individual vote (Stains and Vickrey, 2017).

    The influence of faculty professional development and learning communities on the study findings has implications for faculty professional development more broadly. Time constraints and perceived lack of flexibility in implementation are documented barriers to implementing PI (Henderson and Dancy, 2008; Turpen et al., 2016); however, fidelity of implementation is also an important consideration (Borrego et al., 2013; Stains and Vickrey, 2017). Given that an overarching goal of the AAU initiative at our institution was to increase the number of faculty using active learning, the faculty developers leading the professional development sessions opted to provide reasonable adaptations that retained the most critical components of PI in an effort to encourage faculty to adopt the strategy. Additionally, if we understand that faculty are learners in the broader STEM education reform process (Mulnix, 2016), it makes sense to continue educating faculty on best practices in teaching while allowing some flexibility in implementation for those unwilling or unable to implement EBIPs with high fidelity. Developing reflective teachers through faculty professional development and learning communities is a necessary step toward broader curricular change (Henderson et al., 2012).

    Finding 2 (Research Question 2): Faculty Vary on How and Why They Situate Clicker Activities into Their Class Sessions

    In regard to our second research question, we found that there was variability in how faculty situated the clicker activities in their class sessions. For some faculty, clicker activities were the main point in the class at which students interacted with the faculty member (categories/subcategories 1A, 1B, 1C, and 3, for example); for others, clicker activities were just one point of interaction (categories/subcategories 1D, 2, 4A, and 4B). Some used clickers very consistently, meaning in nearly every class session and roughly at the same points during the class session (categories/subcategories 1B, 1C, 1D, and 3), while others used them seemingly depending on the topic at hand (categories/subcategories 2, 4A, and 4B).

    We believe this variation arose because faculty reported many different reasons for situating clicker activities in their classes. Many of the reasons provided align with previous work, including that the faculty had different types of questions that served different purposes (Turpen et al., 2010), that consistency was important (Moog, 2014), that breaking up the class session was desirable (Turpen et al., 2016), and that they chose one of the four teaching strategies (i.e., lecture, clicker activity, demonstration/video, or other active learning) depending on the topic. In regard to selecting the best teaching strategy for the content, past literature supports that content can drive the type of teaching strategy faculty use, specifically that faculty use pedagogical content knowledge to present content in appropriate ways for learners (Shulman, 1986; Gess-Newsome, 1999; Park and Oliver, 2008).

    Given the variety of reasons reported for implementing clicker-based active learning, it seems reasonable to expect similar variety in how faculty situated the clicker activities. However, this finding is in contrast to finding 1. If faculty used similar implementation styles because they learned the clicker question approach from colleagues or from engaging in professional development activities through the teaching center, we might expect that how they situate clicker-based active learning in the classroom would also be similar. This was not the case. Each faculty participant had his or her own reasons and goals for using clicker-based active learning and, as a result, his or her own way of situating clicker-based active learning in the classroom. While the exact reasons differed, there was one commonality: nearly all the faculty interviewed reported that at least one of their goals for using clicker-based active learning was to increase attention and engagement during the class session. They reported that this influenced their decisions about when to pose a clicker question during a class session. For example, some faculty were intentionally consistent about when they interspersed clicker activities during lecture (i.e., categories/subcategories 1A, 1B, 1C, 1D, and 3), while others simply varied the teaching strategies used every class session to keep students engaged (i.e., categories/subcategories 2, 4A, and 4B). This engagement goal also seemed to influence what kind of question was asked at what time point. For example, a faculty participant from subcategory 1C stated that he/she aimed for a review question at the start of class and a mixture of other question types for the middle-of-class question. The questions were interspersed at these intervals with the purpose of engaging students at the beginning of class and re-engaging students in the middle of lecture. Moreover, we found that a handful of faculty additionally used more informal strategies to engage students. Specifically, faculty in subcategories 1D and 4B appeared to use questioning as a teaching tool throughout their class sessions. The idea that a lecture-based class can be interactive in this way is similar to Hora’s (2015) suggestion that, while lecture may seem like a unidimensional teaching strategy, it does not preclude other teaching strategies from occurring concurrently.

    Additional Finding 1: Using Clicker Activities Prompted Unexpected Changes to Some Faculty’s Teaching

    Several faculty participants expressed during their interviews that using clicker-based active learning prompted them to unexpectedly change their teaching in some way. One faculty participant (from category 2) chose not to continue using the actual clicker devices in later courses because of technical issues; however, this participant developed similar activities for other courses and implemented them without the clicker technology. Another faculty participant (from subcategory 1C) indicated that having to write and implement clicker questions helped him/her be more thoughtful and purposeful regarding the timing of normal (i.e., nonclicker) questions during the class session. This faculty member explained that, before using clicker-based active learning, he/she may have asked questions during the class session, but the questions were not always well thought out. Now, after having to write and prepare clicker questions, he/she indicates in the lecture notes the exact question to ask students and then plans for the integration of the students’ most likely responses into the lecture.

    Additionally, two faculty participants from the high-interaction subcategories 1D and 4B expressed that using clickers led them to increase the amount of questioning they used. The faculty participant from subcategory 1D indicated that asking questions was part of his/her teaching style before using clickers; however, using clicker-based active learning led this faculty member to ask more questions and be more interactive. The faculty participant from subcategory 4B described how implementing clicker activities helped him/her become comfortable with asking questions, to the extent that he/she asks questions throughout a class session instead of only during the clicker activities. Both of these faculty participants described a particular process for how they ask so many questions during their class sessions. Specifically, they described breaking down concepts into smaller parts and asking questions about individual parts or steps. For example, the faculty participant from subcategory 1D described asking students “What’s the next step?” when solving a problem on the chalkboard and explained that this leads to asking a lot of questions and provides a means to model problem solving. The faculty participant from subcategory 4B described his/her approach to breaking down concepts as “guided practice” and said, “Sometimes I will sort of backtrack and say ‘before we answer this [bigger question], let’s make sure we all understand this piece of it first. Let’s start with this, “Can someone tell me about what this means or how these things are related?”’ and then we’ll work our way toward actually answering the [bigger] question.” Providing guided practice for students is touted as a best practice for educators (Angelo, 1995). Additionally, the process these faculty described is conceptually similar to instructional scaffolding and sociocultural views on learning in which students’ participation in learning is emphasized (Vygotsky, 1978). Thus, the faculty participants using high levels of interaction are doing so in a meaningful context aimed at helping their students better understand complex scientific processes.

    Finally, after this study was conducted, we learned through informal communications that several faculty participants in this study had changed their teaching in another way. Three of the participants, who each taught an introductory chemistry course, had decided to include more active learning in their courses after their success in implementing clicker-based active learning. Specifically, they decided to develop roughly ten 5- to 10-minute videos of themselves lecturing on a topic. Students were required to view the videos before attending the lecture on that topic, and the faculty member would then spend 5–10 minutes during that lecture engaging in (additional) clicker-based active learning. Essentially, the faculty wanted to remove some lecture time from the course and replace it with additional clicker activities.

    Although several faculty expressed that using clicker-based active learning changed their teaching in some way, research suggests that STEM faculty tend to see barriers to implementing active learning in their classrooms, such as lack of time, large class sizes, and a desire to cover large amounts of content (Faust and Paulson, 1998; Henderson and Dancy, 2007; Michael, 2007; Turpen et al., 2016). Arguably, clicker-based active learning is one of the easiest types of active learning to add to a lecture-based course. Keeping these barriers and this relative ease of adoption in mind, we believe that clicker-based active learning can be thought of as an entry point to curricular change and may lead some faculty to add more interaction or active learning to their courses in the future.

    Additional Finding 2: Observation Timeline Data Show Nuances of Implementation Styles

    Our last finding is in regard to the validity and utility of collecting this type of classroom observation data. We have already discussed that we focused our efforts on a specific, yet common, class type: the high-enrollment, lecture-based STEM course. When we conducted the visual analysis of all the timelines, there were very clear similarities and differences between the faculty participants’ implementation styles and how they chose to situate clicker activities in class. Categorizing the faculty was very easy, and only two faculty participants required discussion regarding the category or subcategory to which they belonged. Additionally, statistical tests confirmed that our categories and subcategories significantly differed in the ways suggested by the visual analysis (see the Supplemental Material).

    However, while examining some of our descriptive analyses of the categories/subcategories, we noticed that there were not always clear differences between them. In other words, some of the descriptive data did not suggest that these categories were meaningfully different (e.g., see Figures 4 and 5), even though the visual timeline data showed clear differences and our statistical tests confirmed them (see the Supplemental Material). We know from past research that observational data are important, because faculty often self-report using more active learning than observational data would suggest (Ebert-May et al., 2011). However, because this type of quantitative observation data (e.g., OPAL, COPUS, TDOP) can be visualized in the form of chronological timelines, we were able to define and describe the clicker implementation styles being used in classrooms across campus. Other, more common types of data collected in educational settings, such as course evaluations or surveys, would not provide the level of detail and complexity that this type of quantitative observation data provided via the timelines. The idea that these types of observation data are more complex and useful than typical approaches has also been noted in past literature (e.g., Hora, 2015). We agree with this sentiment and would add that analyzing the visualizations generated from these data helped to distinguish subtle, but important, nuances of implementation styles. Given the complexities of quantifying what occurs in classrooms, we feel that the current study speaks to the validity and utility of continuing to conduct classroom observations and generate visual timelines as part of our evaluations of teaching techniques in STEM.
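
    To illustrate what a chronological timeline of coded classroom observations might look like in practice, the sketch below is a minimal, hypothetical example (it is not the OPAL tooling used in this study); the activity labels, minute values, and colors are assumptions made only for illustration of the general idea of plotting coded intervals for one 50-minute class session.

    # Hypothetical sketch: plot coded observation intervals for one class
    # session as a chronological timeline (illustrative only; not OPAL).
    import matplotlib.pyplot as plt

    # Invented coded intervals: (activity, start_minute, end_minute)
    session = [
        ("Lecture", 0, 12),
        ("Clicker activity", 12, 18),
        ("Lecture", 18, 30),
        ("Demonstration", 30, 36),
        ("Lecture", 36, 44),
        ("Clicker activity", 44, 50),
    ]

    colors = {"Lecture": "lightgray",
              "Clicker activity": "steelblue",
              "Demonstration": "darkorange"}

    fig, ax = plt.subplots(figsize=(8, 1.5))
    for activity, start, end in session:
        # Each coded interval becomes one horizontal segment on a shared row.
        ax.broken_barh([(start, end - start)], (0, 1),
                       facecolors=colors[activity])

    ax.set_xlim(0, 50)
    ax.set_yticks([])
    ax.set_xlabel("Minutes into class session")
    # Build a simple legend from the activity-to-color mapping.
    handles = [plt.Rectangle((0, 0), 1, 1, color=c) for c in colors.values()]
    ax.legend(handles, colors.keys(), loc="upper center",
              bbox_to_anchor=(0.5, -0.6), ncol=3)
    plt.tight_layout()
    plt.show()

    Stacking such single-session timelines for all observed sessions of a course gives the kind of at-a-glance visual comparison of implementation styles described above.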

    Implications for Faculty Practitioners

    The findings from the current study may be of value to faculty practitioners interested in or currently implementing clicker-based active learning. For one, these findings demonstrate one common approach to implementing clicker activities (at least at our institution), as well as many variations regarding when the clicker activities are situated in class sessions and how they are used alongside other teaching strategies. Additionally, this research provides faculty practitioners with insight from peers on why they chose these implementation styles; other faculty may have similar goals for their courses. Beyond these applications, faculty practitioners may find it useful to review the OPAL timelines, which demonstrate the considerable variability in the structure of the classrooms we observed, even though these classrooms shared many traits (high-enrollment, lecture-based courses in STEM; see Figure 1 and the Supplemental Material). Seeing this variability may give faculty practitioners the confidence to implement EBIPs in a manner that fits their teaching style and course objectives. Anecdotally, we see at our institution that, as faculty observe colleagues’ classes in which clicker activities are being used, they feel more confident about how to situate clicker activities into their own courses based on their own course goals. Finally, this research demonstrates that learning about EBIPs and seeing their implementation by colleagues is a useful approach, although it may not always lead to an implementation that maintains the fidelity of the EBIP as it was originally researched (Borrego et al., 2013; Stains and Vickrey, 2017).

    CONCLUSION

    The results from this study add to the ever-growing body of literature on clicker-based active learning by describing how and why faculty implement clickers in a common course type: high-enrollment, lecture-based courses. This is an important endeavor, because faculty tend to see barriers to implementing active learning in large courses (Henderson and Dancy, 2007) and also tend to adapt, rather than fully adopt, the clicker procedures described in past literature (Dancy et al., 2016). Future work should continue to examine how and why faculty use clicker-based active learning in order to better understand the ways this flexible teaching strategy is implemented. The more we know about how and why faculty use various teaching methods, the better researchers, faculty developers, and practitioners will be able to encourage STEM faculty toward curricular change and more effective use of EBIPs.

    ACKNOWLEDGMENTS

    This research was supported by grants from the AAU Undergraduate STEM Education Initiative and the Professional and Organizational Development (POD) Network. We thank the faculty members involved in this study for allowing their courses to be observed and for agreeing to be interviewed about their experiences using clicker-based active learning.

    REFERENCES

  • Angelo, T. A. (1995). Classroom assessment for critical thinking. Teaching of Psychology, 22(1), 6–7.
  • Borrego, M., Cutler, S., Prince, M., Henderson, C., & Froyd, J. E. (2013). Fidelity of implementation of research-based instructional strategies (RBIS) in engineering science courses. Journal of Engineering Education, 102(3), 394–425.
  • Caldwell, J. E. (2007). Clickers in the large classroom: Current research and best-practice tips. CBE—Life Sciences Education, 6(1), 9–20.
  • Creswell, J. W., Plano Clark, V. L., Guttmann, M. L., & Hanson, W. E. (2003). Advanced mixed methods research designs. In Tashakkori, A., & Teddlie, C. (Eds.), Handbook of mixed methods in social and behavioral research (pp. 209–240). Thousand Oaks, CA: Sage.
  • Crouch, C. H., & Mazur, E. (2001). Peer instruction: Ten years of experience and results. American Journal of Physics, 69(9), 970–977.
  • Dancy, M., Henderson, C., & Turpen, C. (2016). How faculty learn about and implement research-based instructional strategies: The case of peer instruction. Physical Review Physics Education Research, 12(1), 010110.
  • Dye, K. M., & Stanton, J. D. (2017). Metacognition in upper-division biology students: Awareness does not always lead to control. CBE—Life Sciences Education, 16(2), ar31.
  • Ebert-May, D., Derting, T. L., Hodder, J., Momsen, J. L., Long, T. M., & Jardeleza, S. E. (2011). What we say is not what we do: Effective evaluation of faculty professional development programs. BioScience, 61(7), 550–558.
  • Faust, J. L., & Paulson, D. R. (1998). Active learning in the college classroom. Journal on Excellence in College Teaching, 9(2), 3–24.
  • Frey, R. F., Fisher, B. A., Solomon, E. D., Leonard, D. A., Mutambuki, J. M., Cohen, C. A., … Pondugula, S. (2016). A visual approach to helping instructors integrate, document, and refine active learning. Journal of College Science Teaching, 45(5), 20–26.
  • Gess-Newsome, J. (1999). Pedagogical content knowledge: An introduction and orientation. In Gess-Newsome, J., & Lederman, N. G. (Eds.), PCK and science education (pp. 3–17). Kluwer Academic Publishers.
  • Han, J. H. (2014). Closing the missing links and opening the relationships among the factors: A literature review on the use of clicker technology using the 3P model. Educational Technology & Society, 17(4), 150–168.
  • Henderson, C., Beach, A., & Finkelstein, N. D. (2012). Four categories of change strategies for transforming undergraduate instruction. In Tynjälä, P., Stenström, M.-L., & Saarnivaara, M. (Eds.), Transitions and transformations in learning and education. Dordrecht: Springer.
  • Henderson, C., & Dancy, M. H. (2007). Barriers to the use of research-based instructional strategies: The influence of both individual and situational characteristics. Physical Review Special Topics—Physics Education Research, 3(2), 1–14.
  • Henderson, C., & Dancy, M. H. (2008). Physics faculty and educational researchers: Divergent expectations as barriers to the diffusion of innovations. American Journal of Physics, 76(1), 79–91.
  • Henderson, C., & Dancy, M. H. (2009). Impact of physics education research on the teaching of introductory quantitative physics in the United States. Physical Review Special Topics—Physics Education Research, 5(2), 1–9.
  • Hora, M. T. (2013). Exploring the use of the Teaching Dimensions Observation Protocol to develop fine-grained measures of interactive teaching in undergraduate science classrooms (Wisconsin Center for Education Working Paper 2013-6). Madison: University of Wisconsin–Madison. Retrieved May 9, 2018, from www.wcer.wisc.edu/publications/workingPapers/papers.php
  • Hora, M. T. (2015). Towards a descriptive science of teaching: How the Teaching Dimensions Observation Protocol illuminates the dynamic and multi-dimensional nature of active learning modalities in postsecondary classrooms. Science Education, 99(5), 783–818.
  • Hora, M. T., & Ferrare, J. J. (2014). Remeasuring postsecondary teaching: How singular categories of instruction obscure the multiple dimensions of classroom practice. Journal of College Science Teaching, 43(3), 36–41.
  • Knight, J. K., Wise, S. B., & Southard, K. M. (2013). Understanding clicker discussions: Student reasoning and the impact of instructional cues. CBE—Life Sciences Education, 12(4), 645–654.
  • Lewin, J. D., Vinson, E. L., Stetzer, M. R., & Smith, M. K. (2016). A campus-wide investigation of clicker implementation: The status of peer discussion in STEM classes. CBE—Life Sciences Education, 15(1), ar6.
  • Lopez, J. A., Love, C., & Watters, D. (2014). Clickers in biosciences: Do they improve academic performance? International Journal of Innovation in Science and Mathematics, 22(3), 26–41.
  • Lund, T. J., Pilarz, M., Velasco, J. B., Chakraverty, D., Rosploch, K., Undersander, M., & Stains, M. (2015). The best of both worlds: Building on the COPUS and RTOP observation protocols to easily and reliably measure various levels of reformed instructional practice. CBE—Life Sciences Education, 14(2), ar18.
  • Mazur, E. (1997). Peer instruction: A user’s manual. Upper Saddle River, NJ: Prentice Hall.
  • Merriam, S. B. (2009). Qualitative research: A guide to design and implementation. San Francisco, CA: Jossey-Bass.
  • Michael, J. (2007). Faculty perceptions about barriers to active learning. College Teaching, 55(2), 42–47.
  • Moog, R. (2014). Process oriented guided inquiry learning. In McDaniel, M. A., Frey, R. F., Fitzpatrick, S. M., & Roediger, H. L. (Eds.), Integrating cognitive science with innovative teaching in STEM disciplines (pp. 147–166). St. Louis, MO: Washington University Libraries.
  • Mulnix, A. B. (2016). STEM faculty as learners in pedagogical reform and the role of research articles as professional development opportunities. CBE—Life Sciences Education, 15(4), es8.
  • Park, S., & Oliver, J. S. (2008). Revisiting the conceptualisation of pedagogical content knowledge (PCK): PCK as a conceptual tool to understand teachers as professionals. Research in Science Education, 38(3), 261–284.
  • Sawada, D., Piburn, M. D., Judson, E., Turley, J., Falconer, K., Benford, R., & Bloom, I. (2002). Measuring reform practices in science and mathematics classrooms: The Reformed Teaching Observation Protocol. School Science and Mathematics, 102(6), 245–253.
  • Schonwetter, D. J. (1993). Attributes of effective lecturing in the college classroom. Canadian Journal of Higher Education, 23(2), 1–18.
  • Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4–14.
  • Smith, M. K., Jones, F. H., Gilbert, S. L., & Wieman, C. E. (2013). The Classroom Observation Protocol for Undergraduate STEM (COPUS): A new instrument to characterize university STEM classroom practices. CBE—Life Sciences Education, 12(4), 618–627.
  • Smith, M. K., Wood, W. B., Adams, W. K., Wieman, C., Knight, J. K., Guild, N., & Su, T. T. (2009). Why peer discussion improves student performance on in-class concept questions. Science, 323(5910), 122–124.
  • Stains, M., & Vickrey, T. (2017). Fidelity of implementation: An overlooked yet critical construct to establish effectiveness of evidence-based instructional practices. CBE—Life Sciences Education, 16(1), rm1.
  • Suri, H. (2011). Purposeful sampling in qualitative research synthesis. Qualitative Research Journal, 11(2), 63–75.
  • Tashakkori, A., & Teddlie, C. (2010). SAGE handbook of mixed methods in social and behavioral research (2nd ed.). Thousand Oaks, CA: Sage.
  • Turpen, C., Dancy, M., & Henderson, C. (2010). Faculty perspectives on using peer instruction: A national study. In AIP Conference Proceedings (Vol. 1289, No. 1, pp. 325–328).
  • Turpen, C., Dancy, M., & Henderson, C. (2016). Perceived affordances and constraints regarding instructors’ use of Peer Instruction: Implications for promoting instructional change. Physical Review Special Topics—Physics Education Research, 12(1), 010116.
  • Turpen, C., & Finkelstein, N. D. (2007). Understanding how physics faculty use peer instruction. In AIP Conference Proceedings (Vol. 951, No. 1, pp. 204–207).
  • Turpen, C., & Finkelstein, N. D. (2009). Not all interactive engagement is the same: Variations in physics professors’ implementation of peer instruction. Physical Review Special Topics—Physics Education Research, 5(2), 020101.
  • Turpen, C., & Finkelstein, N. D. (2010). The construction of different classroom norms during peer instruction: Students perceive differences. Physical Review Special Topics—Physics Education Research, 6(2), 020123.
  • Vickrey, T., Rosploch, K., Rahmanian, R., Pilarz, M., & Stains, M. (2015). Research-based implementation of peer instruction: A literature review. CBE—Life Sciences Education, 14(1), es3.
  • Vygotsky, L. S. (1978). Mind in society. Cambridge, MA: Harvard University Press.
  • Warfa, A. M. (2016). Mixed-methods design in biology education research: Approach and uses. CBE—Life Sciences Education, 15(4), rm5. doi: 10.1187/cbe.16-01-0022.