8 JMU Virtual 3D Campus

Dr. Xuebin Wei

Rationale

Virtual Reality (VR) is an innovative and interactive data visualization technology that has been extensively applied in social sciences, engineering, medical studies, and other fields. Although several JMU courses introduce such technologies, students do not fully comprehend how VR scenes are constructed and how the underlying data are collected and prepared. This project aims to design a hands-on, essential learning experience in which students collect and process 3D LiDAR data [1], [2] and build virtual 3D models that can be visualized on various VR devices.

This project will teach students the fundamental theories of LiDAR data and give them hands-on experience collecting LiDAR points with Sweep 3D Scanners [3]. Students will also learn and practice how to create and modify 3D models with machine learning algorithms and how to create VR scenes in Unity or Amazon Sumerian. With the assistance of previous funding, our department already owns eight Sweep 3D Scanners. However, to increase the portability of the scanners, enhance model accuracy, and add texture to the VR scenes, cameras and touch screens compatible with the existing scanners are still needed. This grant will finance the purchase of those required devices. Funds are also requested to support travel to national conferences, workshops, and meetings to present the project output and to communicate with colleagues inside and outside of JMU about possible collaborations.

Instructional Design and Implementation Plan

I successfully taught 3D data collection and 3D printing in the past semester, when students learned how to collect LiDAR points with the Sweep 3D Scanners, create 3D models, and print those models in the JMU 3Space Classroom [4]. A dedicated video tutorial has been produced to help students understand the technologies and become familiar with the devices.

In this project, I will teach students to create a JMU Virtual 3D Campus across four courses:

  1. In the 2018 Fall semester, I will introduce the basic concepts and skills of collecting LiDAR data with the Sweep 3D Scanners in the Data Mining (IA 340) course. Students will become familiar with the scanners and learn how to import the LiDAR data from the scanners to computers (a minimal import sketch follows this list).
  2. In the 2019 Spring semester, I will teach students to create 3D models from the collected LiDAR points in the Data Visualization (IA 342) course and print the 3D models in the JMU 3Space Classroom. Meanwhile, I will introduce Unity and Amazon Sumerian to students and teach them to create virtual scenes from the created 3D models (see the modeling sketch after this list). The resulting virtual scenes can be visualized and explored on various VR devices.
  3. In the 2019 Fall semester, students taking the Machine Learning (IA 480) course will learn how to polish the 3D models with machine learning algorithms and add textures to the virtual scenes (see the object-separation sketch after this list). The purchased cameras will be integrated to help identify 3D objects and create textures for the virtual scenes. The portable touch screens will allow students to deploy their machine learning algorithms to create 3D models in real time.
  4. I am also planning to teach a JMU X-Lab course in the 2019 Fall semester in which interested students with different backgrounds will create a JMU Virtual 3D Campus. This dedicated hands-on, team-based course will enable students to concentrate on LiDAR data collection, 3D modeling, object extraction, and virtual reality visualization within one semester. Students will spend extra time collecting LiDAR data and creating 3D models around the JMU campus, experiment with different types of machine learning algorithms to recognize 3D objects, and add interactive functions to their virtual scenes.
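
The sketch below illustrates the kind of import step students practice in IA 340: converting scan samples into Cartesian XYZ points and saving them as a point cloud file. The CSV column layout (azimuth, elevation, distance) and the file names are assumptions for illustration, not the scanner software's actual export format.

```python
# Minimal sketch: convert exported scan samples (spherical coordinates) into an
# XYZ point cloud. The CSV columns azimuth_deg, elevation_deg, distance_cm are
# a hypothetical export format; adjust to whatever your scanner software writes.
import numpy as np
import open3d as o3d

samples = np.loadtxt("scan_export.csv", delimiter=",", skiprows=1)
azimuth = np.radians(samples[:, 0])    # horizontal angle of each reading
elevation = np.radians(samples[:, 1])  # vertical angle of each reading
distance = samples[:, 2] / 100.0       # centimeters -> meters

# Spherical-to-Cartesian conversion
x = distance * np.cos(elevation) * np.cos(azimuth)
y = distance * np.cos(elevation) * np.sin(azimuth)
z = distance * np.sin(elevation)

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(np.column_stack((x, y, z)))
o3d.io.write_point_cloud("scan.ply", pcd)  # reusable in the modeling step below
```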
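
For the IA 342 modeling step, one possible route from points to a mesh that Unity or Amazon Sumerian can import is Open3D's Poisson surface reconstruction, sketched below; the file names and parameter values are placeholders rather than the course's prescribed settings.

```python
# Sketch: reconstruct a triangle mesh from a point cloud and export it in a
# format (OBJ) that Unity or Amazon Sumerian can import.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")
pcd = pcd.voxel_down_sample(voxel_size=0.02)   # thin out overly dense regions
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Poisson reconstruction fits a watertight surface to the oriented points
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=100_000)

o3d.io.write_triangle_mesh("building.obj", mesh)
```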
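
Finally, the object-extraction work in IA 480 and the X-Lab course could start from a simple unsupervised pipeline like the one sketched below: RANSAC removes the dominant ground plane and DBSCAN groups the remaining points into candidate objects. The thresholds are illustrative assumptions, not values tuned for the project.

```python
# Sketch: separate candidate objects (trees, benches, walls) from a campus scan.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")

# Fit and remove the dominant ground plane with RANSAC
plane_model, ground_idx = pcd.segment_plane(
    distance_threshold=0.05, ransac_n=3, num_iterations=1000)
objects = pcd.select_by_index(ground_idx, invert=True)

# Cluster the remaining points; each cluster is a candidate object
labels = np.array(objects.cluster_dbscan(eps=0.3, min_points=20))
for k in range(labels.max() + 1):
    cluster = objects.select_by_index(np.where(labels == k)[0].tolist())
    o3d.io.write_point_cloud(f"object_{k}.ply", cluster)
```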

Project Transferability

The project utilizes low-cost devices and will provide Python code and video tutorials that can be quickly applied to other disciplines, such as architecture, geography, medical training, and so forth. Therefore, this project can be smoothly transferred to other faculty members who want to teach similar courses.

Innovative and Creative Teaching Outcomes

Students will gradually learn and comprehend VR technologies through the following three courses or one intensive X-Lab course:

  1. Upon completing the IA 340 course, students are expected to understand the concepts of LiDAR data and be able to use 3D scanners to collect 3D LiDAR points.
  2. Upon finishing the IA 342 course, students are expected to be familiar with 3D modeling and 3D printing and to be able to create 3D models and simple virtual scenes from the collected LiDAR points.
  3. Upon finishing the IA 480 course, students should understand the basic usage of machine learning algorithms, be able to distinguish 3D objects by integrating LiDAR points with camera images, and be able to improve the accuracy of their 3D models and the effects of their virtual scenes.
  4. Upon finishing the X-Lab course, students will be proficient in LiDAR scanning, able to design their own machine learning algorithms to separate objects from LiDAR points or 3D models, and able to create an interactive virtual scene, such as a JMU Virtual 3D Campus that can be used to promote JMU to the public.

Result Dissemination Plan

In addition to publishing the project results in Pressbooks, I will share all the course materials, such as Python code and video tutorials, on my GitHub and YouTube channels. I will actively explore collaboration opportunities inside and outside of JMU by attending national conferences, local meetings, and workshops. I will work closely with JMU Media Relations to identify appropriate news outlets for the students' work and the project outputs.

Video Tutorial

This tutorial introduces how to create a virtual reality scene in AWS Sumerian with models created from LiDAR points and photos. It also shows how to add a virtual host and how to interact with 3D objects in the VR scene.

 

Demo VR scene

Logistics and Resources Plan

This project does not depend on any additional fiscal, teaching, space, or other resources.

Research Design and Data Analysis Plan

This project focuses on undergraduate education and does not have a research plan. However, depending on the implementation and outputs of the project, I will seek additional collaboration opportunities, such as 4VA grants or NSF grants.

References

[1] X. Wei and X. Yao, “3D Model Construction in an Urban Environment from Sparse LiDAR Points and Aerial Photos – A Statistical Approach,” Geomatica, vol. 69, no. 3, pp. 271–284, 2015.

[2] X. Wei and X. Yao, “A Hybrid GWR-Based Height Estimation Method for Building Detection in Urban Environments,” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. II-2, pp. 23–29, Nov. 2014.

[3] “Sweep 3D Scanning Kit.” [Online]. Available: http://scanse.io/3d-scanning-kit/. [Accessed: 12-Oct-2017].

[4] X. Wei, “Creating and Printing 3D Models with LiDAR Data,” in Creative Teaching Cases – A Collection that Inspires, Harrisonburg, VA: Pressbooks, 2018.

 

License


Creative Teaching Cases - A Collection that Inspires Copyright © 2023 by James Madison University Faculty is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, except where otherwise noted.
