Could AI be taught multiple skills at the same time? Is immersive holography more realistic than ever? It is impossible to predict the exact future of artificial intelligence (AI). But one way to get a glimpse is by looking at the research that Nvidia will present at Siggraph 2022, to be held August 8-11.
Nvidia is collaborating with researchers to present 16 papers at Siggraph 2022, spanning multiple research topics that impact the intersection of graphics and AI technologies.
One paper, by researchers from the University of Toronto and UC Berkeley, details innovations in reinforcement learning models that could help teach AI multiple skills at the same time.
Another delves into new techniques to help build large-scale virtual worlds with instant neural graphics primitives. Stepping closer to technologies only seen in science fiction, there is also research on holography that could one day pave the way for new display technology that will enable immersive telepresence.
“Our goal is to do work that’s going to impact the company,” David Luebke, vice president of graphics research at Nvidia, told VentureBeat. “It’s about solving problems where people don’t already know the answer and there is no easy engineering solution, so you have to do research.”
The intersection of research and enterprise AI
The 16 papers that Nvidia is helping to present focus on innovations that impact graphics, which is what the Siggraph show is all about. Luebke pointed out, however, that almost all of the research can be applied to AI outside of the graphics industry.
“I think of graphics as one of the hardest and most interesting applications of computation,” Luebke said. “So it’s no surprise that AI is revolutionizing graphics and graphics is providing a real showcase for AI.”
Luebke said that the researchers who worked on the reinforcement learning model paper actually view themselves as more in the robotics field than graphics. This model could be used by robots and any AI needing to do multiple tasks.
“The thing about graphics is that it’s really, really hard and it’s really, really compelling,” he said. “Siggraph is a place where we showcase our graphics accomplishments, but almost everything we do there is applicable in a broader context as well.”
Computational holography and the future of telepresence
Throughout the COVID-19 pandemic, individuals and organizations around the world suddenly became a lot more familiar with video conferencing technologies like Zoom. There has also been growing use of virtual reality headsets, connecting to the emerging concept of the metaverse. The metaverse and telepresence could well one day become significantly more immersive.
Nvidia will be presenting Siggraph papers on a topic called computational holography. Luebke explained that at a basic level, computational holography is a technique that can construct a three-dimensional scene, where the human eye can focus anywhere within that scene and see the correct thing as if it were really there. The research being presented at Siggraph details some new approaches to computational holography that could one day lead to VR headsets that are dramatically thinner than current options, providing a more immersive and lifelike experience.
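The papers themselves detail Nvidia's new neural approaches, but the general idea of computing a hologram can be illustrated with a classic baseline: the Gerchberg–Saxton algorithm, which iteratively finds a phase-only hologram whose reconstruction approximates a target image. The sketch below is a simplified illustration, not Nvidia's method; it models far-field propagation as a plain Fourier transform, an assumption made for brevity.

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50, seed=0):
    """Compute a phase-only hologram whose far-field (Fourier-plane)
    reconstruction approximates the target amplitude image."""
    rng = np.random.default_rng(seed)
    # Start from a random phase pattern at the hologram plane.
    phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        # Propagate to the image plane (modeled as a 2D Fourier transform).
        field = np.fft.fft2(np.exp(1j * phase))
        # Keep the propagated phase, but impose the desired target amplitude.
        field = target_amplitude * np.exp(1j * np.angle(field))
        # Propagate back and keep only the phase (phase-only constraint).
        phase = np.angle(np.fft.ifft2(field))
    return phase

# Example: optimize a hologram for a simple bright-square target.
target = np.zeros((32, 32))
target[12:20, 12:20] = 1.0
hologram_phase = gerchberg_saxton(target, iterations=100)
reconstruction = np.abs(np.fft.fft2(np.exp(1j * hologram_phase)))
```

Research like the work being presented at Siggraph replaces or augments this kind of hand-crafted iterative loop with learned neural components that better model real display optics.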
“This has been a kind of holy grail of computer graphics for many years,” Luebke said of computational holography. “This research is showing that you can use computation, including neural networks and AI, to improve the quality of holographic displays that work and look good.”
Looking beyond just the papers being presented at Siggraph, Luebke said that Nvidia's research group is keenly interested in telepresence innovations.
The post Nvidia AI research takes science fiction one step closer to reality appeared first on Venture Beat.