Founder & CEO, DeepMotion
Prior to founding DeepMotion, I spent over a decade and a half in the gaming industry in various leadership and engineering roles, including CTO of Disney’s midcore mobile game studio, technical director at ROBLOX, senior developer on World of Warcraft at Blizzard, and technical lead at Airespace (now part of Cisco Systems). I’ve shipped multiple AAA titles, including World of Warcraft, StarCraft II, Star Wars Commander, and ROBLOX. I’m always looking for new ways to push the boundaries of interactive technology through physics and AI.
I am the Founder and CEO of DeepMotion, a startup based in San Mateo, California. DeepMotion is a pioneer in the emerging field of Motion Intelligence. We are building tools for lifelike graphics using physical simulation and artificial intelligence. Our mission is to enable interactive content and expand creative capability by revolutionizing real-time, procedural simulation. Our products include a suite of 3D Motion Intelligence solutions for interactive AR and VR avatars, autonomous interactive characters, and computer-vision-driven motion detection and reconstruction, as well as a 2D animation tool, Creature.
Porting the interactive dog demo over to the Oculus DK2 was the catalyst for us pursuing core interaction technology—something that has always been an undercurrent in my work. I got my start in the industry as a programmer at Blizzard working on the engine of World of Warcraft. Even then, it was incredibly exciting to see what a virtual world could provide through deep immersion and surreal escape. Many devs like me were dreaming of 360-degree versions of these kinds of social environments; we were already thinking about how we could enable users to navigate other worlds as freely as we navigate the real world. A central motivation in starting DeepMotion was to study and solve problems around enabling virtual interaction—interaction that feels so real, it’s indistinguishable from the physical world. Our team is now approaching that goal with our intelligent avatar system. We employ advanced physics simulation, robotics, biomechanics, and bleeding-edge machine learning techniques to create agents that are physically interactive in real time.
In the next five years, augmented reality and other forms of interactive media will enrich the human experience immensely. I think they will serve as an extension of life insofar as they will increase our ability to achieve and access new things by an order of magnitude.
Applying AI to the entertainment industry is an emerging trend, driven by recent strides in machine learning and people’s appetite for inspiring new forms of entertainment. The success of projects like AlphaGo is not only a reflection of AI’s advancement, but also a result of humanity’s unstoppable pursuit to understand ourselves in the process of teaching machines to become better. We believe in applying the latest technology not only to create fun new experiences that increase enjoyment, but also to leverage these advancements for more profound applications in areas like robotics, which can enrich daily life with greater productivity.
About 15 years ago I went on the “Back to the Future” ride with my family at Universal Studios Hollywood. The ride takes place in a motorized vehicle with an IMAX movie projected onto a dome screen. I was amazed by the total sense of immersion and immediately struck by the potential that feeling held for the world of entertainment. That was my first Extended Reality experience. It wasn’t until much later, in 2016, that the company bought our first VR headset (the Oculus Development Kit 2) and ported one of our demos into VR. The demo was of a physically simulated dog that you could interact with in real time. We added hand tracking with Leap Motion and, for the first time, we could reach out our own hands and actually play with the virtual pet. I remember thinking, “wow, that feels real!” For a lot of people, this demo generated a phantom sensation of touching fur when playing with the dog. I had the realization then that if we can reach out and feel the presence of a pet in simulation, we could connect with other human beings this way as well. That inspired the team to expand our simulation technology to interactive human avatars.
We launched our first major public partnership this year with Samsung to bring expressive motion capture to the new Samsung Galaxy S10. We think interactive mobile experiences are a major gateway for the next wave of interactive media, so our near-term focus continues to be the expansion of our Digital Avatar Solutions. In parallel, we are always working to mature our Motion Intelligence platform—an end-to-end pipeline for generating cheaper motion data and decoding the intelligence of real-world motion for animation, simulation, vision, and robotics.
We select for passion when building our team, and that passion, in turn, becomes the first principle that motivates our work.
My personal hero is my wife, who is the strongest supporter of my career and my passion for virtual worlds, games, VR, and motion AI—without her I would have given up many times. She encouraged me to embrace my passion as my job. That’s advice I would share with anyone aspiring to enter the space, without hesitation.
My professional hero is the fictional character Santiago from Ernest Hemingway’s The Old Man and the Sea. The idea that “man is not made for defeat” helped me to overcome fear when confronted by immense unknowns in my youth, and I’m still influenced by that notion today.