NSF Awards: 1839379
Improved resources for learning American Sign Language (ASL) are in high demand. The aim of this Cyberlearning project is to investigate the feasibility of a system in which signing avatars (computer-animated virtual humans built from motion capture recordings) teach users ASL in an immersive virtual environment. The system is called Signing Avatars & Immersive Learning (SAIL). The project focuses on developing and testing this entirely novel ASL learning tool, fostering the inclusion of underrepresented minorities in STEM.
This project leverages the cognitive neuroscience of embodied learning to test the SAIL system. Signing avatars are created using motion capture recordings of native deaf signers signing in ASL. The avatars are placed in a virtual reality landscape accessed via head-mounted goggles. Users enter the virtual reality environment, and the user's own movements are captured via a gesture-tracking system. A "teacher" avatar guides users through an interactive ASL lesson involving both the observation and production of signs. Users learn ASL signs from both the first-person perspective and the third-person perspective. The inclusion of the first-person perspective may enhance the potential for embodied learning processes. Following the development of SAIL, the project involves conducting an electroencephalography (EEG) experiment to examine how the sensorimotor systems of the brain are engaged by the embodied experiences provided in SAIL. The project team pioneers the integration of multiple technologies: avatars, motion capture systems, virtual reality, gesture tracking, and EEG, with the goal of making progress toward an improved tool for sign language learning.
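To make the lesson flow described above concrete, the sketch below is a purely illustrative outline, written here in Python rather than the project's actual implementation, of how a lesson step that alternates sign observation and sign production, shown from either the first-person or third-person perspective, might be represented. The names (Perspective, LessonStep) and the choice of example signs are assumptions for illustration only.

from dataclasses import dataclass
from enum import Enum

class Perspective(Enum):
    FIRST_PERSON = "first_person"    # the sign is shown as if from the learner's own body
    THIRD_PERSON = "third_person"    # the learner watches the teacher avatar face-on

@dataclass
class LessonStep:
    gloss: str                  # the target sign, e.g. "MILK"
    observe_from: Perspective   # how the avatar demonstration is presented
    require_production: bool    # whether the learner's own attempt is captured by the gesture tracker

# A toy lesson: observe a sign, then attempt it while being tracked.
lesson = [
    LessonStep("MILK", Perspective.THIRD_PERSON, require_production=False),
    LessonStep("MILK", Perspective.FIRST_PERSON, require_production=True),
    LessonStep("EGGS", Perspective.THIRD_PERSON, require_production=True),
]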
Lorna Quandt
Assistant Professor
Welcome, visitors! Thank you for watching our video about the SAIL project at Gallaudet University. Our project is in the early stages, as we work towards creating and testing a proof-of-concept ASL learning experience in virtual reality. We have completed the motion capture recordings of ASL content, and are currently creating signing avatars from the recordings and building the interactive lessons. Once we have a working version of SAIL, we will conduct an EEG cognitive neuroscience experiment that will help us see how the "embodied learning" aspect of SAIL influences ASL learning. We welcome questions and comments on any aspect of this project. And again, thank you for your interest!
Karen Mutch-Jones
Senior Researcher
Even at this early stage of the project, the novel aspects of this complex learning tool are evident, and the potential for supporting (and maybe even transforming) ASL learning is notable. Very exciting! While I know you aren't yet ready to discuss outcomes, I wonder if you could share reactions/comments from new signers who are helping you to test SAIL. Also, you mention that MoCap allows you to capture "impeccable data" from markers on the body. What types of movement (or other feedback) are you paying attention to?
Lorna Quandt
Assistant Professor
Hi Karen, thanks for the comment! We sure hope that SAIL can jumpstart a transformation of ASL learning tools.
We haven't had any new signers test SAIL yet, because right now we are transforming the motion capture recordings into avatar characters. Until those are created, the motion capture recordings are not user-friendly--they look like dots moving on a screen (aka, point-light displays). I am really curious to hear what our first sign-naive users think--their feedback will be really valuable to future iterations of the project.
The motion capture data records native signers producing basic ASL content, much the same way Hollywood studios use motion capture to record characters' movements and facial expressions for use in movies. The 16 cameras capture the hand, body, finger, and face movements that are critical to ASL with extremely high accuracy, and we then use those recordings to render our signing avatars. The other type of motion recording our project uses was not really highlighted in the video, though there were glimpses of it: gesture tracking, in which a LEAP gesture tracker follows the SAIL user's own movements as they interact with the signing avatars, allowing them to see their own sign productions alongside those of the avatars.
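If it helps to picture what "seeing your own sign productions alongside the avatar's" could mean computationally, here is a tiny, purely hypothetical sketch in Python (not our actual SAIL code): it scores a tracked hand trajectory against a reference motion-capture trajectory, assuming both have already been resampled to the same frame count and expressed in the same coordinate frame, which is the hard part in practice.

import numpy as np

def trajectory_error(user_frames, reference_frames):
    """Mean per-joint distance between a learner's attempt and a reference
    motion-capture trajectory. Both arrays have shape (frames, joints, 3)
    and are assumed to be aligned in time and space already."""
    if user_frames.shape != reference_frames.shape:
        raise ValueError("resample both recordings to a common length first")
    per_joint = np.linalg.norm(user_frames - reference_frames, axis=-1)
    return float(per_joint.mean())

# Toy usage: 60 frames, 21 hand joints, 3-D positions.
reference = np.random.rand(60, 21, 3)                              # stand-in for a mocap recording
attempt = reference + np.random.normal(0, 0.01, reference.shape)   # a close attempt
print(f"mean joint error: {trajectory_error(attempt, reference):.3f}")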
Kate Meredith
Technologically, this is absolutely fascinating! Your video is well done. Good pace. Just the right amount of information. I have been working with students and administration at the Wisconsin School for the Deaf for a number of years to address language access issues for deaf students in astronomy. Outside of some fingerspelling and a few basics, I can confidently sign "Finished? Save to desktop." So... I am wondering how you imagine your signing avatar helping me prep for class, assuming I am working with a teacher who signs.
Lorna Quandt
Assistant Professor
Hi Kate, thanks for watching and I am happy to hear you enjoyed the video!
We envision that in the not-too-distant future, you could take brief ASL lessons through your personal VR device, like an Oculus Rift or even a mobile-phone-based VR setup. So maybe after dinner, you'd put on your VR headset and do a few of the gamified ASL lessons. In this future iteration of SAIL, the lessons might involve points, levels, and feedback about your signing. So you could complete ASL I in VR, interacting with avatars who are demonstrating high-quality ASL content. That is our dream for SAIL!
Kristin Pederson
Hi Lorna--
Your project inspires me! In partnership with the Smithsonian National Museum of Natural History, we at Twin Cities PBS are producing "When Whales Walked: Journeys in Deep Time," a multi-platform project that includes a national PBS documentary and educational outreach, including a virtual reality game/experience for museum spaces. (Check out our video in this showcase!) My question for you: what have you found to be the biggest challenges--and "wins"--of creating VR content? We are longtime media producers, but this is our first foray into the medium. We've been lucky to work with wonderful partners. I am interested in hearing your thoughts about production. Thanks!
Lorna Quandt
Assistant Professor
Thanks for your interest! I am happy to hear you enjoyed the video! Great question--there are certainly a lot of challenges in creating usable VR content. So far, some of those challenges for our project involve the high fidelity we require in order to show people fluid, fluent ASL. We need very high definition on the face, arms, and every joint of the fingers to ensure that the eventual signing avatars can produce beautiful, natural ASL. Another challenge is getting all the different components of our system connected to one another. We have Oculus VR goggles and a LEAP gesture tracker working together on a Unity-based game engine, and there are certainly challenges in making sure every element works in sync. Finally, we work really hard to make sure our avatars and VR environment feel natural and pleasant. It is easy to feel creeped out or unsettled when entering a virtual environment, and we do not want to be trying to teach people when they're feeling ill at ease. So making a smooth, natural experience is a top priority of ours. Let me know if you have any other questions. I am excited to learn more about your work!
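As a small illustration of what "in sync" means in practice (purely a toy sketch in Python, not our Unity integration code), one common approach is to timestamp every sample from each device on a shared clock and then pair samples by nearest timestamp:

import bisect

def pair_nearest(times_a, times_b, tolerance=0.010):
    """Pair each sample time in stream A with the closest sample time in
    stream B, keeping only pairs within `tolerance` seconds. Assumes both
    lists are sorted and measured on the same clock."""
    pairs = []
    for t in times_a:
        i = bisect.bisect_left(times_b, t)
        candidates = times_b[max(i - 1, 0):i + 1]
        if not candidates:
            continue
        nearest = min(candidates, key=lambda s: abs(s - t))
        if abs(nearest - t) <= tolerance:
            pairs.append((t, nearest))
    return pairs

# Toy usage: a 90 Hz headset stream and a ~60 Hz gesture-tracker stream.
headset = [i / 90 for i in range(90)]
tracker = [i / 60 + 0.002 for i in range(60)]
print(len(pair_nearest(headset, tracker)), "aligned sample pairs in the first second")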
Sarah Haavind
Senior Research Project Manager
Hello Lorna and visitors,
I have to say I am "over the moon" with the concept of this project! Congratulations on an obviously terrific start to your work in this fascinating area. I agree with Kate that your video makes it obvious how cool the technology you are adapting will be for the work, both for your design team and eventually for learners. Surely the sign-native users you are filming are excited to see the outcome of their investment as well. Can you tell us more about the sorts of topics for learning you have in mind beyond the ASL alphabet and counting? I am envisioning that once they have the fundamentals, your avatars might be far more flexible and customizable "teachers" than humans, who have the steep task of memorization mixed into learning a new language. Will it be possible for avatars to become like "native" speakers more quickly than a human might? Hmmm, does the time taken with programming and film editing make up for the time spent memorizing? My mind is bending a little. I look forward to your thoughts.
Lorna Quandt
Assistant Professor
Hello Sarah! I'm loving your enthusiasm!
The fingerspelling and counting were actually just demos that we used for this video :-) In the version of SAIL we're working on now, the ASL teacher will teach the user about 30 signs--things like BACON, EGGS, and MILK while standing in a kitchen environment, and SWING, PLAY, and JUGGLE while doing a games-related activity. We have structured four brief ASL lessons, each of which contains a few target ASL signs loosely grouped by theme.
Our avatars currently produce ASL that has been pre-recorded from native signers in motion capture suits, but in the future, we hope to use the avatars to piece together signs and produce new content, sort of like you're describing. Even further out, an avatar could use machine learning to draw upon its knowledge base and produce new content in an adaptive fashion, sort of like how Alexa and Siri can respond creatively to questions using their knowledge bases. Hope that helps you envision the future of our project! It sure is wild to think of what this type of technology may be able to do someday, in the hopefully-not-too-distant future!
Sarah Haavind
Senior Research Project Manager
Oh my gosh you must be so excited to get up and go to work every morning!! :-D It IS wild to envision, I agree - almost as good as in the 90s envisioning this type of conversation on the new Internet haha - appreciate all the time you are taking responding to our questions and sharing more about the work. Congratulations!
Perla Myers
This looks awesome! Thank you so much for sharing your work! Congratulations!!! The possibilities of this for the future seem amazing.
Lorna Quandt
Assistant Professor
I am so glad you enjoyed the video! The future of learning is indeed an exciting thing to envision!
Amy Pate
I'm looking at how avatars in simulations can be more inclusive, and we've already looked into accessibility issues with simulations, so your concept and research with ASL avatars is really exciting. I'm looking forward to hearing more about your work in the future.
Lorna Quandt
Assistant Professor
Thanks, Amy! Signing avatars hold so much potential to increase the inclusivity and accessibility of these new technologies. I will make sure to check out your video.
Sarah Haavind
Senior Research Project Manager
One more note - the musical background to your video is also delightful. Thank you for such an immersive 2-D experience! :)
Lorna Quandt
Assistant Professor
I am glad you enjoyed it!
Daryl Pfeif
What a wonderful and exciting use of technology across disciplines! LOVE IT !!
Lorna Quandt
Assistant Professor
Thanks so much!
Diana Bairakatrova
Hi Lorna! Your project is fascinating and well done! I am curious how other institutions that offer ASL learning can get access to SAIL.
Thank you!
Lorna Quandt
Assistant Professor
Thanks so much, Diana! At this time, SAIL is being developed and we are working towards a proof of concept--showing that all these systems can work together and provide a learning experience for users. Once we have created the proof of concept, we certainly hope to continue to improve and develop SAIL, and in the long term, we would love to share the SAIL system with schools, individual users, and the Deaf and ASL-learning communities. Maybe in the future people can download SAIL onto their personal VR systems at home!
Diana Bairakatrova
This is great! Thank you!