Meet the Speakers

Some questions about AR & VR for:

 

Patrick Ehlen (Loop AI Labs)

An interview by Anett Gläsel-Maslov

Patrick Ehlen, Ph.D., lives in San Francisco and is Chief Scientist at Loop AI Labs. He specializes in representation learning for semantics, pragmatics, and concept acquisition. He worked on methods to extract concepts and topics from ordinary spontaneous conversations among people as part of the DARPA CALO project at CSLI/Stanford. He has produced 45 research publications in the areas of computational semantics, cognitive linguistics, psycholinguistics, word sense disambiguation, and human concept learning. WOW!

Get ready for some beef!

 

Patrick, you are Chief Scientist at Loop AI Labs. What is your background in general? What was the vision behind building a business around artificial intelligence? And what is the mission of Loop AI?


I never studied computer science, but because of my father's work we had a rather hefty computer at home at a time before most people owned them, and I learned to program from an early age. After I read Arthur C. Clarke's novelization of 2001: A Space Odyssey in third grade, I became fascinated by AI, and learned all I could about it, trying to program what I guess would now be called "chatbots," among other things.



But when I went to college, it was during one of those infamous "AI winters" when there weren't many avenues available to study artificial intelligence. One exception was a pair of just-released books I found at the university bookstore on "Parallel Distributed Processing," a new theory of computation that attempted to model the human mind using neural networks and connectionist architectures. So instead of computer science, I studied cognitive psychology, with a focus on language and communication, and ultimately zeroed in on the cognitive processes that underlie how humans communicate about concepts. How do we "transmit" concepts from one mind to another, and how do we negotiate our myriad perspectives and understandings of the world with each other? 



So I pursued a PhD in cognitive psychology at the New School for Social Research in New York, while simultaneously working on natural language understanding (NLU) projects at AT&T Labs in New Jersey, and then journeyed out to Stanford for a post-doc at CSLI doing computational semantics. A few years later, Bart Peintner approached me about a startup he had co-founded, part of which called for some automatic understanding of novel concepts that show up in natural language texts. It seemed a natural area to pursue the question of whether a machine could acquire new concepts in the same way that humans do. And if it could, then a machine could do all kinds of things in business that only humans -- with their rich innate communication skills -- could do previously. "Helping machines understand the human world." That is our mission, in a nutshell!

 

DIGILITY focuses mainly on immersive technologies such as AR, VR, and MR, but in the future none of these developments will likely stand alone. Artificial intelligence, machine learning, and huge amounts of data are part of the picture and necessary to optimize computing processes. Today we still see the limits of these technologies, and there is certainly a long way to go. But from your perspective, which components will be the main drivers of satisfying results in immersive realities?

To me, the greatest promise of augmented reality and its varying levels of immersion lies in its capacity to manipulate and use context. Using VR, we can manipulate a person's experience of context in ways the human mind would never encounter in the real world, and we have still only scratched the surface of that potential. I can imagine, for instance, greatly speeding up the time it takes to become fluent in another language by offering a VR or AR manipulation of context that helps us ingest information at triple the real-world rate.




On the flip side, VR and AI can be used together to process context at a scale that no single human mind ever could, synthesizing the grand "perspectives" of many minds into a coherent whole. For instance, you can go to a concert now, and afterwards go online and experience snippets of alternate perspectives of the same show from people posting snapshots of their experience. This is pretty cool, but it's only a patchwork approach. What if we could develop an intelligence that processes the entire experience from all perspectives, and then provides you with a synthesized perspective that no single person could ever experience?


To do this, I'm pretty sure we'll need some hefty processing power, as the tensors needed to accommodate all these perspectives would be of very high rank. I personally believe this will also require some math that you don't typically find in today's linear algebra packages, and quantum computers will help with that.
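As a rough illustration of the scale Patrick is hinting at (a toy sketch with made-up dimensions, not Loop AI's actual approach): stacking every attendee's recorded perspective into one joint array adds an axis per aspect of the experience, and the element count grows multiplicatively with each axis.

```python
import numpy as np

# Toy sketch: suppose each of N attendees records a perspective as a
# (time, height, width, channels) array. All numbers here are invented.
N, T, H, W, C = 200, 3600, 1080, 1920, 3

# Stacking all perspectives gives a rank-5 tensor; its size explodes
# multiplicatively, which is why "hefty processing power" comes up.
elements = N * T * H * W * C
print(f"rank-5 tensor with {elements:,} elements")  # ~4.5 trillion

# A tiny toy version makes the rank concrete without the memory cost:
toy = np.zeros((4, 10, 8, 8, 3))   # 4 perspectives, 10 frames, 8x8 RGB
print(toy.ndim, toy.size)          # rank 5, 7,680 elements
```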

 

From the business perspective, which trends in virtual and augmented reality are the most promising to you?



If you look at progress in business in the 20th century, you see a clear trend of accelerated progress whenever there are advances in two areas: communication and, for lack of a better term, "methods of abstraction" that allow businesses to see their world of interest more clearly, usually from the vantage of a higher perch.




For most of human history, we were only capable of conceiving of information from a limited perspective. In more recent years, the emphasis businesses have put on data science shows them pushing the envelope even further in attempting to gain larger perspectives in a principled way. VR has the potential to push even further, allowing businesses to see the world of their business interests better, and to provide clarity to customers in a way that was not available before.

 

And where do you see the main shift from a gimmicky, entertaining technology towards a valuable, impactful, life-changing technology?

History has demonstrated time and again that cruelty and violence are most likely to occur among people who fail to assume each other's perspectives. Traditionally, communication channels among different groups of people have been very narrow, so the potential to expose others to an alternate way of viewing the world -- a different culture -- has been pretty limited. Hence, a long history of violent clashes.




But there are good reasons to believe this does not need to be the human condition. As the world grows "smaller" -- thanks to television, the internet, and expanded methods of communication -- we see people drawing more diverse perspectives into their understanding of the world, and they become less prone to cruelty and violence as a result. So the real transformative promise of AR & VR is to expand that capacity of understanding throughout the human race more easily and thoroughly than ever before.


To give a small example, when we drive cars, we are often unpleasant to each other in a way that we would never be in person, because we're forced to deal with a channel of communication that is severely limited compared to normal. Face-to-face, we can mutually experience the context in which events are happening, and we have many subtle ways of sharing our mental states. 



When a car suddenly stops in front of us for no apparent reason, we get frustrated because we don't experience the context in which this action has happened. The ability of cars to communicate their states and intentions is greatly impoverished: turn signals, brake lights, flashing the headlights, tailgating, waving your hand out the window, or just nudging your car over towards the car in the next lane... these are the only signaling mechanisms we have at our disposal. But what if we made cars that allowed drivers to read each other's perspectives and intentions much better, more like they do in face-to-face interactions? Perhaps an AR windshield now provides "x-ray vision" to peer through that car in front of me and see that the driver is braking for a small dog that ran into the road. With this expanded awareness of context, my consciousness of the perspectives and intentions of others is now also expanded, and I am less likely to react in the aggressive way that ignorance inspires.


...Or maybe they are just reading their phone and deserve to be honked at! 


 

Which of our other DIGILITY speakers could you imagine joining on the holodeck, and which experience would you like to walk through with her or him?

Kimo Quaintance and I would go on the holodeck and re-enact the lightsaber battle on the Death Star. He gets to be Darth Vader, since he's taller.


 

In two sentences: WHY DIGILITY?

Some of the smartest minds in the world talking about some of the most interesting aspects of the future. Why would you NOT want to come??

 

Great answers, Patrick! We’d love to see you and Kimo on the dark side. ;-)

 

More information about DIGILITY, and a direct link to our ticket shop: www.digility.de