The purpose of educational research is to better understand how people learn and to improve student learning. Typically, this research either asks what students think, such as “What percentage of students are interested in taking courses online?”, or assesses how a change in instruction affects learning, such as “Are online lectures as effective as face-to-face lectures?” Your research methods will depend heavily on which of these two goals (or both) you are trying to accomplish, along with many other factors. If you have little experience with human subjects research, or you just need a refresher, this primer is intended to help you select appropriate methods for conducting educational research. To do this, we’ll go through six main sections: types of research questions, pre-tests vs. post-tests, non-experiments vs. experiments, types of measurement, analysis, and statistics.
With the onslaught of new technologies and new uses of technology in education providing alternative methods for instructing students, many educators are left wondering when it is appropriate to use technology for instruction in higher education. A slew of research suggests the circumstances under which technology improves, maintains, or even hurts learning outcomes, but many of these studies compare the new method of instruction to a “traditional” method in which a lecturer talks at students during class time, holds office hours, and provides little additional support. The problem with this type of comparison is that many other non-technological interventions are available to improve upon the “traditional” method, so while the technological method might be an improvement, it is not necessarily the best method.
Now that technological resources are more commonly used at universities to provide online instruction to on-campus students, educators are asking under what circumstances it is best to use technology and when it is best to rely on peer interactions and instructors. Research on successful uses of technology, peers, and instructors in education is abundant, but direct comparisons between these use cases are uncommon. This report analyzes the successful cases from this literature and inductively determines the strengths of these educational resources. It then integrates this information to predict how educational resources could best be applied in courses. Though a meta-analysis would be preferable, predictions about which resource is better suited than another for a specific function in education (e.g., whether technology or peers are better at providing constant, instant feedback) are all that can be supported without additional research. Pressing research questions on the effective use of these educational resources are also identified.
This paper describes a project in which a MOOC (Massive Open Online Course) was developed to blend a Circuits and Electronics course taught to non-majors at Georgia Tech. The MOOC platform contains videos of all the course lectures, online homework, and quizzes. Over 400 students take this course on campus each term. Since these students were spread over eight to nine sections, consistency of coverage and of grading was a major motivation for inverting this course. Another major motivation was to introduce hands-on activities into the classroom so that students could get small-scale laboratory experiences within a lecture-based course. A number of assessment efforts for this course are ongoing.
This study provides an empirical analysis of using online technologies and team problem-solving sessions to shift an undergraduate fluid mechanics course from a traditional lecture format to a collaborative learning environment. Students were drawn from three consecutive semesters of the same course taught by the same professor. Two treatment groups (Flipped, FlippedPlus) used different combinations of online technologies (Tegrity, WileyPlus, NetTexts). These students solved the same problems (100+) in class, working in teams of two using desktop whiteboard tablets while receiving “just-in-time” tutoring from the instructor and graduate assistants. Outside of class, the treatment groups watched 72 video lectures (averaging 11 minutes) covering course topics and worked example problems. The comparison group received three 50-minute in-class lectures each week. Data included three midterms and a final exam. Results revealed that even though the students in the FlippedPlus class had the lowest average GPA of all groups, their average final exam score was the highest of all groups, followed by the Flipped class students. While the results hold promise, additional research is needed to support these findings.
The terms hybrid, blended, flipped, and inverted are inconsistently defined in the literature, creating a barrier to efficient research on and implementation of these types of classes. This paper examines existing definitions of these new types of courses and uses those definitions to identify two dimensions critical to differentiating types of courses: how instruction is delivered to students and what type of instruction students receive. The paper then addresses how these dimensions were used to create a taxonomy that defines hybrid, blended, flipped, and inverted classrooms. The taxonomy focuses on learning experiences in which students receive instructional guidance either directly from an instructor or indirectly from an instructional designer (e.g., through educational software); therefore, some elements of courses, such as unmonitored problem solving, are not specified.
In its American incarnation, accreditation exists because of a confluence of two otherwise unrelated historical trends. The first involved the massive outpouring of philanthropy to institutions of higher learning at the beginning of the 20th century. Shocked by the dismal state of university administration and accountability, industrialists like John D. Rockefeller and Andrew Carnegie demanded minimal standards as a condition for receiving grants and gifts. These were men of industry who were enamored with industrial management practices, including quality control and measurement. The second trend was spurred by the massive increase in enrollments in the mid‐20th century, increases that threatened to overwhelm the nation’s colleges. The solution was to make institutions more efficient. Efficiency in post‐WWII America meant factory efficiency, and so colleges and universities adopted the methods of the factory floor.
There is a collapse of confidence under way in U.S. colleges and universities. It is a collapse that has been documented in what seems like a steady stream of recent reports and books, including my own. Amid the many dire warnings there is one bright thread: advances in information technology are often viewed as a pathway to rebuilding public confidence in higher education by reducing costs, expanding access, improving outcomes, and increasing financial transparency. If technology could help rebuild public confidence, higher education would be better off for it, but without more engagement from the research community in attacking the problems facing the nation’s colleges and universities, I am not optimistic that this will happen.
In recent years, states have implemented system-wide programs, including the University System of Georgia’s STEM Initiative, to enhance postsecondary science, technology, engineering, and mathematics (STEM) education. This paper presents the results of a review of the scholarly literature and a national Internet survey undertaken to develop a catalogue of state-level STEM enhancement programs, focused on program objectives, demographics, programmatic components, and outcomes. Forty-two states have developed such programs, thirty of which focus specifically on P-16 STEM education.
As articulated by Etienne Wenger (1999) and other scholars, the “community of practice” (CoP) represents a useful organizing concept for enhancing collaboration, sharing knowledge, and disseminating best practices among researchers and practitioners in postsecondary education. In this document, we outline the potential of developing online, virtual CoPs using web-based tools such as Microsoft SharePoint. Technology alone is not sufficient, however, and our recommendations underscore the need for organizational support and individual participation.
If current economic, social, and technological trends continue, it is increasingly likely that the typical “University” of the future will not look like the present-day institutional arrangement. This paper explores disruptive forces impacting the delivery of postsecondary education and speculates on the potential structure and impact of 21st-century universities, focusing on the approaches, partnerships, and technologies that will drive the development of future venues for higher education.