Although augmented and virtual reality systems (which I will refer to collectively as extended reality, or XR) have not yet reached the everyday ubiquity some industry insiders expected, these technologies are increasingly used in domains traditionally considered risky, such as aviation and medicine. This is Part 1: AI Smart Avatars.
In fact, before making the jump from academia to consulting, I worked almost exclusively on VR surgical simulators and medical training applications. Surgeons traditionally move from declarative instruction to field practice on real patients and, while they are certainly well-trained, there has always been a gap between reading about a procedure and performing it. I’m sure you don’t have to imagine too hard to come up with a number of ways that could be suboptimal. With that in mind, the applicability of XR to risky procedural domains like surgery should be pretty apparent.
And it is. Enough so that more developers in these application areas are emerging every year all over the globe. Although the emergence of such development teams is excellent for the state of the art, the downside is that it is becoming increasingly difficult to track each new innovation in XR across every use case in every domain. This is problematic because, until consumer XR takes the lion’s share of the market, the most promising advancements are likely to emerge from these very specific products. Luckily, experts in the field know this and have worked to ensure that all XR practitioners can get a taste of the newest developments at events like the 2018 Virtual Reality and Healthcare Symposium at Harvard Medical School. Although the name may suggest that the event was only of interest to those in the medical field, it brought together many international members of the XR community from every domain to share and explore each other’s work. And, thanks to our friends at Stanford and VR Voice, we were invited to take part in this incredible meeting of minds. As you might imagine, there was a LOT discussed at the event. So over the next few weeks I’d like to share a few highlights that speak to XR’s future, starting with...
In a world where connecting to someone is as simple as opening an app, it is only a matter of time until XR reflects people’s need to interact both locally and internationally. However, the level of detail and interactivity that was acceptable in past interactive experiences is unlikely to stand up in XR. For example, if VR chat rooms are international, how will we be able to talk to each other without significant lag? Will our avatar selves only be useful while we are logged in and actively participating, or can we have “away messages” a la early-2000s AIM? How does one keep that representation consistent across all forms of media?
Companies like ObEN are already working towards answering these questions and more. ObEN is creating next-generation personal artificial intelligence that can interpret how people look, behave, talk, and even sing, so that their avatars act more like them, even when they’re not around. These smart avatars are blockchain-based and can handle interactions when the user is AFK, imitate the user’s voice in multiple languages, or even be exported into other applications to create a consistent user presence. Such an AI-controlled avatar could make real-time, cross-cultural communication easier than it has ever been, allowing users to speak to people halfway around the world in their native language and even “interact” while they’re offline.
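To make the idea a little more concrete, here is a minimal, entirely hypothetical sketch in Python. Every name in it is invented for illustration (ObEN has not published this API); it just models an avatar that answers on its owner’s behalf while they’re AFK, in the visitor’s preferred language:

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: a toy "smart avatar" that greets visitors
# on its owner's behalf. A real system would generate responses with a
# voice/language model rather than a canned lookup table.

@dataclass
class SmartAvatar:
    owner: str
    online: bool = False
    # Away messages keyed by language code, filled in with the owner's name.
    away_messages: dict = field(default_factory=lambda: {
        "en": "{owner} is away right now, but I can take a message.",
        "es": "{owner} no está disponible; puedo tomar un mensaje.",
        "fr": "{owner} est absent(e) ; je peux prendre un message.",
    })

    def greet(self, visitor_lang: str = "en") -> str:
        # When the owner is online, hand the conversation to them directly.
        if self.online:
            return f"{self.owner} is here, connecting you now."
        # Otherwise answer in the visitor's language, falling back to English.
        template = self.away_messages.get(visitor_lang, self.away_messages["en"])
        return template.format(owner=self.owner)

avatar = SmartAvatar(owner="Dana")
print(avatar.greet("es"))   # Spanish away message while Dana is offline
avatar.online = True
print(avatar.greet("es"))   # live hand-off once Dana logs in
```

Even this toy version shows the two behaviors the questions above point at: a persistent presence that works while the user is offline, and responses tailored to whoever is on the other end.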
Speaking of blockchain, stay tuned for next week’s installment to learn more about how federated public databases may color the future of XR presence and identification...
This is VR AR XR, Part One: AI Smart Avatars, the first post in our three-part series. Check out the next installments.
Interested in working with us? Check out available positions.