OpenxAI Developer NVIDIA AMA

Got a question for our special guest Hariharan Suresh, Senior Cloud & Generative AI Technologist at NVIDIA?

Please log in and reply below!

Tune in to the livestream on 4 July 2025: OpenxAI Developer AMA (AI Infra, GenAI) · Luma.


With decentralized AI growing and GenAI evolving rapidly, how do you see the structure of AI startups changing in the next few years? Will early-stage founders have more power thanks to decentralized infra, or will scaling still depend on big infra players?

My question is about multi-use GPUs:
Will future NVIDIA GPUs offer dynamic modes or profiles that optimize performance for gaming, AI, or other workloads depending on user needs?

What are the most underused GenAI tools or SDKs from NVIDIA that you think developers in emerging AI ecosystems like ours should explore more?

NVIDIA’s GPUs are widely used in big AI data centers.

But what happens as AI starts running on many smaller, heterogeneous machines (like Xnodes), where fast communication and timing are harder to manage?

How do you think hardware such as Tensor Cores will change to support this shift? Are you working on ways to make these kinds of systems communicate with each other faster, for example using RDMA-style technology?

With OpenxAI making decentralized AI infra real, do you think we’ll see AI shift toward more modular, plug-and-play services? And is current cloud infra ready for that kind of shift?

With the rapid rise of open-source and decentralized AI platforms like OpenxAI, how is NVIDIA adapting its hardware and software stack to better support edge deployments, bare-metal setups, and non-traditional cloud environments?