Augmented and Virtual Reality for the Humanities
For the past year, Kevin LaGrandeur and I have been developing a concept for an augmented reality application to help teach difficult humanities texts. Since we are both English professors, literature was the natural place to start. In our experience, students often struggle with the complex historical and cultural contexts of a play like Hamlet, and this serves as yet another barrier to reading.
Most teachers of the play can speak to the complaints about the language; there are tools to help with that. Imparting context through visuals and explanations remains more difficult. Videos are often used instead, and this can lead students to rely on that medium in place of reading. Our goal is to complement and encourage student reading, not detract from it by providing an easier avenue.
Initially, we enrolled in a National Science Foundation boot camp based on the merits of our concept. Though enlightening, this did not prove a viable avenue. With funding from NYIT, and help from several companies, we have embarked on creating a prototype of our app, one which focuses on the appearance of the king's ghost in Hamlet. We want students to see the importance of the clothing of both the king and Hamlet, along with the cultural context of a deceased king haunting his former country.
This will happen through student interaction with a 3D model on their phones. In augmented reality, they can place the model and then zoom in on key areas within it. Tapping on the ghost will reveal key information about the former king, his armor, and the probability of his existence. Doing the same on Hamlet will highlight his disheveled clothing and how it may indicate his state of mind. There will be other actionable areas around the rampart and castle, but for now, these two are the focus.
We have decided to incorporate gamification elements into our application to help lead students toward a decision: Does Hamlet truly see the ghost of his father, or is it an illusion created by his own mourning and melancholy? Students will make choices, and those choices will earn points that indicate, at the end, an implication of truth or delusion.
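To make the scoring mechanic concrete, here is a minimal sketch of how choices might accumulate toward a "truth or delusion" verdict. All of the choice names, descriptions, and point values below are hypothetical placeholders for illustration; the real app's choices and weights are still being designed.

```python
# Hypothetical choice table: each choice shifts a score toward
# "the ghost is real" (positive) or "a product of grief" (negative).
CHOICES = {
    "guards_saw_it_first": ("The guards saw the ghost before Hamlet did", +2),
    "armor_matches_king": ("The ghost wears the dead king's armor", +1),
    "hamlet_in_mourning": ("Hamlet is disheveled and melancholy", -1),
    "only_hamlet_sees_it": ("Later, only Hamlet sees and hears the ghost", -2),
}

def final_verdict(selected_choice_ids):
    """Sum the points for the student's choices and map the total
    to an implication of truth or delusion."""
    score = sum(CHOICES[c][1] for c in selected_choice_ids)
    if score > 0:
        return score, "Your choices suggest Hamlet truly sees his father's ghost."
    if score < 0:
        return score, "Your choices suggest the ghost springs from Hamlet's grief."
    return score, "Your choices leave the question open."

# Example: a student who weighs the guards' sighting and the armor heavily.
score, verdict = final_verdict(
    ["guards_saw_it_first", "armor_matches_king", "hamlet_in_mourning"]
)
print(score, verdict)
```

The design choice here is that no single tap decides the outcome; the verdict emerges from the pattern of choices, which mirrors how the play itself keeps the question open.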
We have worked with Dell and Enduvo to acquire software to make the transition from modeling software to virtual and augmented environments. The road to the application is forked between virtual and augmented reality. Our initial idea was to use AR because it does not require special hardware and is cost-effective. Enduvo plans on implementing AR in their software but does not currently support it.
Enduvo has been a great help, but getting 3D models ready for export has been a major roadblock. We do not have any coders yet for this purpose, and learning these skills on the fly takes a lot of effort. It's like learning another language, except that with spoken languages, the listener can interpret and understand misspoken sentences. A computer language with mistakes just kicks back errors, or, worse, nothing, with no way to discern the problem(s).
With an updated storyboard and concept, I have made progress in creating a workable 3D model of Castle Elsinore, the ghost, and a rough stand-in for Hamlet. All of them have exported into Enduvo as well as Unity and Unreal. The previous problems with textures and materials have been overcome, and I have created a mockup video of how the app should perform. It's skeletal right now (no undead pun intended), but it is at least progress.
I have shifted from Sketchup to Blender for the models, and the ability to actively render in the latter program has helped, as has the easier interface for the inclusion of lighting. Who knew that the biggest problem I would have could be remedied by digitally flipping a switch?
After literally months of banging my head against the wall, I have come up with a way to animate and export video from both Sketchup and Blender into a VR app. I've also learned how to work with scenes in Sketchup to create an animation that I've screen-recorded. This now gives a clearer view of what the app will look like. The VR output is nice, although I see that as a secondary product. We still want AR, and once I find a viable outlet for it, I will work on tweaking the model and animation for that platform.
A few more major breakthroughs. The castle and rampart animations are now near complete and look much better. Although most of this is still rough, it looks good for a prototype. The goal is to allow students and professors to get a feel for what the app might do, and to tell us what would work and what wouldn't. We are close to a design that can provide that. At least I am seeing light at the end of the proverbial tunnel.
Finally, a major step forward again. I have created actionable areas within Sketchup that bring up text alert boxes. It's rudimentary, but it works. Now, someone can tap or click, and pop-up text on the ghost, Hamlet, the castle, and even a cannon gives key information that can help students better understand what is going on when Hamlet 'sees' his father's ghost. It's a mockup right now, but it provides a prototype of what we want our app to do.
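The actionable-area mechanic boils down to a mapping from a tapped hotspot to the explanatory text it should display. Here is a minimal sketch of that lookup; the hotspot names and the explanatory text are placeholders I've written for illustration, and in the actual prototype this logic lives inside Sketchup and the game engine rather than in Python.

```python
# Hypothetical hotspot-to-text table for the rampart scene.
HOTSPOT_INFO = {
    "ghost": "The ghost appears in full armor, recalling the dead king's wars.",
    "hamlet": "Hamlet's disheveled clothing may signal his troubled state of mind.",
    "castle": "Elsinore's ramparts were a sentry post; the guards keep the night watch here.",
    "cannon": "Cannon fire marks the new king's revels, which Hamlet condemns.",
}

def on_tap(hotspot_name):
    """Return the alert-box text for a tapped area, or a default prompt
    when the tap lands somewhere without information attached."""
    return HOTSPOT_INFO.get(hotspot_name.lower(), "Explore the rampart to learn more.")

print(on_tap("Ghost"))
```

Keeping the text in a single table like this also makes it easy to hand the content off to a coding team later without rewriting the interaction logic.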
I've also switched to the Unreal Engine, as models from Sketchup tend to work better with that game engine. I've already learned how to code and move characters around, so I am hopeful I can translate what we have into some sort of playable game. This would all be a lot easier if I were a coder, but, alas, I am not. When we get the funds to hire a small coding team, I suspect we will move forward rapidly.
Another major breakthrough. I've found a new model to use as the base for our concept. This one has better textures and does a fantastic job of creating a visualization of our idea. Now that I have animations down, I really feel like this one flows and gives a good impression of what our game will be like.