ETEC643 - Traditional Research
Description of the Project
The California State University, San Bernardino Department of Academic Technology and Innovations (ATI) developed a virtual reality (VR) learning lesson for introductory anthropology courses on campus. An anthropology faculty member created the VR lesson and defined the learning objectives students should achieve through the process, so the lesson meets the department's learning objectives for the introductory anthropology classes. The main goal of the lesson is to introduce students to the basics of conducting an archaeological survey. As a result of this virtual reality project, I became interested in how VR affects student learning. The focus of this study, therefore, was to evaluate the effectiveness of a VR lesson on student learning compared to a traditional lesson given in a face-to-face environment. I wanted to measure the effect that replacing a regular face-to-face lesson with virtual reality would have on students' midterm exam grades. The research question I wanted to answer is:
RQ1: Is there a difference between traditional instruction and instruction using Virtual Reality on student test scores?
This research study used a quasi-experimental design, since it took place in a class environment. The study was designed with a control group and a treatment group to measure the impact virtual reality has on student test scores. To minimize any confounders that might skew the results one way or another, students were randomly assigned to the control and treatment groups within categories that could affect the outcome: (a) age, (b) GPA, (c) level in the program (freshman vs. non-freshman), (d) familiarity with video games, (e) familiarity with VR in particular, and (f) interest in the topic of the class. The goal was for the two groups to have similar characteristics.
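The balancing procedure described above can be sketched as stratified randomization: group students by a confounding characteristic, shuffle within each stratum, and split each stratum evenly between the two groups. The sketch below is an illustration of the general technique, not the study's actual procedure; the student records and the single `freshman` stratum are hypothetical.

```python
import random
from collections import defaultdict

def assign_groups(students, strata_key, seed=42):
    """Split students into control/treatment within each stratum so that
    both groups end up with a similar mix of characteristics."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for s in students:
        strata[strata_key(s)].append(s)
    control, treatment = [], []
    for members in strata.values():
        rng.shuffle(members)            # randomize order within the stratum
        half = len(members) // 2
        control.extend(members[:half])  # first half to control
        treatment.extend(members[half:])  # remainder to treatment
    return control, treatment

# Hypothetical roster stratified on one confounder (freshman status).
students = [{"id": i, "freshman": i % 2 == 0} for i in range(20)]
control, treatment = assign_groups(students, lambda s: s["freshman"])
```

In practice one would stratify jointly on all six categories (or use matched pairs), but the one-variable version shows the core idea: randomization happens inside each category, so group composition stays balanced.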
The sample questionnaire distributed at the start of the class received 141 responses, of which 106 gave consent to participate in the VR lesson. Of the 106 consenting students, 51 were selected to participate in the study. By the end of the study, a total of 38 students had participated and shown up to the VR lesson (treatment group). The treatment group was then compared to a similar group of students who took the lesson in a face-to-face environment (control group). To gather our data, we compared the midterm test results of the treatment group and the control group.
During data collection, we found some reliability problems that emerged in questions 36–38: some students chose multiple answers for questions that were meant to have a single answer, and other questions were somewhat ambiguous. Both groups had similar issues in answering the multiple-choice and multiple-answer questions. To analyze the midterm test results, we used a nominal measuring scale: we assigned a number to each question based on whether the answer was correct or incorrect.
In the study, we used a pre-survey to create the treatment group. This survey helped us identify and mitigate confounders that might distort the associations in the research. We then added four test questions to the midterm, and the scores on those questions were compared between the treatment and control groups. Since the questions were of mixed types (multiple-choice, write-in, and select-multiple-answers), we created a nominal scale to help with the accuracy of the test results.
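The nominal scale amounts to coding each answer as 1 (correct) or 0 (incorrect) against an answer key, then summing per student. The question labels and answers below are hypothetical placeholders, not the actual midterm items.

```python
# Hypothetical answer key for the four added midterm questions.
ANSWER_KEY = {"q1": "b", "q2": "d", "q3": "a", "q4": "c"}

def score_responses(responses):
    """Nominal coding: each question becomes 1 (correct) or 0 (incorrect)."""
    return {q: int(responses.get(q) == correct)
            for q, correct in ANSWER_KEY.items()}

# One hypothetical student: three of the four answers match the key.
student = {"q1": "b", "q2": "a", "q3": "a", "q4": "c"}
scores = score_responses(student)
total = sum(scores.values())  # per-student total used in the group comparison
```

Coding every item type to the same 0/1 scale is what makes write-in and select-multiple questions comparable to multiple-choice items in a single score.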
In this study, the students in the VR group had more incorrect answers on the midterm; the VR group had a lower score than the control group. However, based on the result of the t-test, we failed to reject the null hypothesis:
H0: There will be no difference in test scores between the traditional-instruction group and the VR group.
Overall, the study found no statistically significant difference between the two groups. Even though the mean score of the VR group was lower than that of the traditional group, the difference is not significant and could easily be the result of measurement or sampling error.
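A comparison like this is typically run as a two-sample t-test. The sketch below implements Welch's version (which does not assume equal variances) using only the standard library; the score totals are invented for illustration, since the study's actual data are not reproduced here.

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n-1)
    se = math.sqrt(va / na + vb / nb)                # standard error of the difference
    t = (mean(sample_a) - mean(sample_b)) / se
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1)
    )
    return t, df

# Invented per-student totals (0-4 correct) for the two groups.
vr_scores = [2, 3, 1, 2, 3, 2, 1, 3]
traditional_scores = [3, 3, 2, 3, 2, 3, 2, 4]
t, df = welch_t(vr_scores, traditional_scores)
```

The t statistic is then compared against the critical value for the computed degrees of freedom; when |t| falls below it, as in the study, the null hypothesis cannot be rejected.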
This study could not reject the null hypothesis. The design of the study and the delivery of the virtual reality lesson might have contributed to the results. One possible issue was that the researchers did not initially introduce the VR students to the learning objectives they had to achieve during their virtual reality experience, and the students needed more exposure to the new technology.
On the other hand, it was not enough that the Ambrosia VR lesson had the same learning objectives as the traditional class; we needed to develop a more cohesive VR lesson that scaffolds the learning before and after the virtual reality experience. VR should be implemented as part of a scaffolded lesson, not as a replacement for one. Research suggests that "providing external scaffolds that help the learners make the connections between the knowledge learned in the game and disciplinary knowledge" (Barzilai & Blau, 2014, p. 66) is necessary for creating an effective lesson. Similarly, the design of future research needs to "encourage learners to pause, reflect, and revise" (Horton, 2012, p. 329). Debriefings or reflective assignments "can help learners create critical links between game representations and events and 'real world' representations and events" (Barzilai & Blau, 2014, p. 66). All the concerns above will need to be addressed in future research to increase the success of virtual reality lessons and experiences.
Throughout this traditional educational research class, I gained a lot of practical knowledge about how to do research. I learned how to create an appropriate research question, build appropriate research tools, and analyze the results. The whole process was challenging, and I learned the most during the results-gathering stage. There I saw the mistakes I made while deploying the research tools and how even the slightest oversight can interfere with the results. I learned how to minimize the impact of those errors and still obtain a valuable and meaningful result. The best part is that even though I failed to reject my null hypothesis, the research was not a failure, and the result can lead to other meaningful research.
References
Barzilai, S., & Blau, I. (2014). Scaffolding game-based learning: Impact on learning achievements, perceived learning, and game experiences. Computers & Education, 70, 65-79.
Horton, W. (2012). E-learning by design (2nd ed., Pfeiffer essential resources for training and HR professionals). San Francisco, CA: Pfeiffer.