
Inside NVIDIA's Alpamayo 1.5, NuRec, and AlpaDreams: A GTC Conversation with Matt Cragun

Michael Rubloff
Apr 17, 2026

At GTC this year, NVIDIA rolled out a wave of updates spanning its autonomous vehicle stack, its neural reconstruction tooling, and the generative simulation research sitting between them. Just months after announcing Alpamayo at CES, the team is back with Alpamayo 1.5, a broader public release of the NuRec SDK and container, an Open Claw to Gaussian Splatting workflow shown during Jensen's keynote, and a research preview called AlpaDreams that pushes Cosmos toward closed loop simulation.
To unpack what shipped and why it matters for developers working on radiance fields and simulation, I sat down with Matt Cragun, Director of Product at NVIDIA, who oversees simulation, tools, and models for autonomous vehicle development. We covered what's new in Alpamayo 1.5, how the NuRec SDK is being broken into modular libraries for the broader reconstruction community, why neural reconstruction continues to chip away at the domain gap, and how AlpaDreams begins to close the loop between policy and world model.
The full conversation is transcribed below, or you can watch it here.
Michael Rubloff: All right, well, welcome to GTC. I'm really honored to be here with Matt Cragun, who I will let you introduce yourself in just a second. It's pretty crazy that it feels like just a couple days ago that we were at CES where NVIDIA announced Alpamayo, and today you have a new series of announcements. But I was wondering if you could just start out very quickly by introducing yourself and telling us a little bit about what was announced today.
Matt Cragun: Sure. So I'm Matt Cragun. I'm a Director of Product at NVIDIA, and I handle all of our simulation and tools and models for autonomous vehicle development. Today we actually have quite a few things that we're bringing to GTC this year, which we're really excited about. Probably one of the bigger ones is Alpamayo. We announced Alpamayo 1.5 today. That's a follow-up for us from Alpamayo 1, which we announced at CES. We actually have had a lot of really great feedback from the community already about Alpamayo, and so we were really excited to be able to take that feedback, roll it into some new features and some updates of the model, and to get it back out to the community as soon as we could.
Michael Rubloff: Yeah, it's amazing. I think I saw that there's over 100,000 downloads on Hugging Face already for Alpamayo, and it's one of the most installed AV data sets. To kind of go off of that too, Alpamayo is comprised of the model, the data set, and also a simulator. I was wondering if you'd be able to talk a little bit about what was changed or updated in this release across those three.
Matt Cragun: Sure, I'd be happy to. So Alpamayo, just like you mentioned, is three different things, and it's really an ecosystem of products that we have so that developers can build reasoning autonomous vehicles. The three ingredients that any developer really needs to build an autonomous vehicle are a model, some data, and a simulator. The main updates for GTC this year were around the model itself. So Alpamayo 1.5 is still a 10 billion parameter model, but we've added updates so that users can interact with it more in terms of navigation commands and directing Alpamayo through prompts — ways that weren't previously possible.
Michael Rubloff: And some of these ways were like, you know, you could ask it to take an upcoming exit, or ask why — what it's thinking as it's coming to a stop sign. Things like that.
Matt Cragun: That's correct. So Alpamayo, generally speaking — in Alpamayo 1, you're able to give it some video frames and it will give you a trajectory output, but it will also give you the reasoning traces, the reasoning logic behind whatever it was that it chose to do. And so now we're giving you the ability to give navigation input. This is one of the things that we heard from a lot of our developers: that in order to deploy it, they need the ability to guide the model to make specific decisions. So if you'd like it to turn right at the next corner, you need to be able to guide it towards that decision so that it knows what you're expecting from it.
Michael Rubloff: And as part of the releases that were announced today, I think there was also a couple related to NuRec, which stands for neural reconstruction and is also pretty important to creating these very lifelike dynamic 3D reconstructions that these models can interact with. I was wondering if you'd be able to give us a little bit of an overview about both the NuRec SDK as well as the NuRec container, and why that's so important for simulation.
Matt Cragun: Yeah, that's great. We're really excited about NuRec. We've made a lot of progress. We've been working on it for actually several years, and this year at GTC we've announced basically making almost everything that we have publicly available to people. There's a few key components. NuRec is really a very horizontal technology for us. So Alpamayo is really designed for the autonomous vehicle space, but NuRec is a very horizontal technology. That means that it serves a lot of different customers and a lot of different use cases. We've seen lots and lots of really great traction inside of the automotive use case, so it gets used really heavily there, but we also have been doing a lot of work with robotics. Also some really cool things with healthcare, and just a number of other industries that are really kind of jumping on the train with NuRec.
A couple of the key things that are worth pointing out: first, that we are releasing what we call the NuRec SDK. We've built effectively complete workflows with neural reconstruction, but what we want is — there are a lot of developers that have already invested a lot of time and energy into building really important reconstruction pipelines for themselves and the applications that they have. And so the SDK is really a set of tools and modules that can be broken into much smaller libraries that can be ingested piece by piece into your pipeline. So if you need a complete pipeline, we have the complete pipeline. If you need just some pieces of it and you need some accelerated libraries, if you need accelerated loss functions, we have that. And so we are working with the development community to be able to make a lot of these components, as many as we can, open source to the community.
We've also released some cool models that we're really excited about. So we have the Asset Harvester as an example, which is something that we use quite a bit in automotive, where we drive down the street, we observe a vehicle or a pedestrian or any type of other thing along the street that we think might be interesting inside of a scenario. And we can actually take it from several views and reconstruct it into a 3D Gaussian shape. And then we can reinsert it back into the scene. We can either put it back into the same place where it was before — and that gives us much more flexibility and freedom in terms of how far we can push our novel views — or we can insert it into other scenes where it hasn't existed before. And so that allows us to create scenarios that are new and different that we didn't actually experience on the drive.
Michael Rubloff: And to go off of that, one of the challenges that I know has been a long-standing one in robotics is this issue of the domain gap and being able to really close this gap. I was wondering if you could also explain why NuRec and these different radiance field representations are helping close this domain gap.
Matt Cragun: Sure. Yeah, so the domain gap has been a problem. I've been doing simulation for a long time, and the domain gap just exists any time you have something that you've simulated and for one reason or another it's different than reality. A really good way to think about it is kind of the uncanny valley but for robotics — this domain gap where what you observe in the simulation doesn't exactly match, for some reason, something that occurs in the real world. This sometimes exists with meshes and textures or physics or other things where what we represent in the simulator just can't quite match reality.
One of the reasons we really like NuRec — and why it has been so effective for us — is that it gives us a very high quality, very high fidelity representation, because we're capturing it with the cameras that we actually want to use, especially in automotive. The vehicle that drives down the street captures data and then we can reconstruct it exactly. So you get all of the data; you don't have kind of a defeatured version of the world.
The other side to that is that when we build things with meshes and textures, we have a lot of power to build exactly what we want, but the amount of detail that we put into those scenes really depends on how much time and money and effort you want to spend to build those things. And so one of the really great things about NuRec and reconstruction is that it's very, very scalable. We can just go out and capture data and we can immediately bring that in, and the geometry, the look, the assets — they all match what we have in the real world. So you get kind of the same level of detail both in terms of content and in terms of visual pixels.
Michael Rubloff: Right. And so it's drastically reducing the amount of time to simulation. So if something happens in the real world and you want to go back and simulate it, or if there's just a scenario that you know you need to account for ahead of time, it's really a great way to cut down this time to simulation. Going through Jensen's keynote today, he briefly touched upon Open Claw, and as part of that he quickly showed a workflow going from Open Claw to Gaussian Splatting. I was wondering if you would be able to talk through a little bit about how that functions.
Matt Cragun: Yeah. So Open Claw — for those who aren't familiar with it — is basically an orchestrator of LLMs and other types of agents. It's a really powerful tool where, rather than writing code, you can give it a task, and then using essentially the LLM agents that it has at its disposal, it can start to execute on that task. It has a really nice, structured way of getting from point A to point B in your tasks.
The reason that's useful for us is that when you use it, you have a really great tool to start automating workflows. One of the things that we observe is that the workflows that we want to build are really based around the APIs and the data exchange between things. The workflow that Jensen showed during the keynote — we went through the process of giving it access to the APIs for our data search tool, and then gave it access to the APIs for our reconstruction tools, and then we gave it access to our APIs for our models — things like the Fixer, which was released previously, and the Asset Harvester — and then gave it access to the APIs for Cosmos. We just gave it a task, and we said: what we want you to do is generate synthetic data that we could use to train something like Alpamayo.
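The wiring Matt describes — handing an agent a set of callable APIs plus a high-level task — can be sketched in miniature. None of the function names below are NVIDIA's; they are hypothetical stand-ins, and the "plan" is hard-coded where a real agent would reason its way through the tool set:

```python
from typing import Callable

def search_drives(query: str) -> list[str]:
    """Hypothetical data-search API: find logged drives matching a query."""
    return [f"drive_{i}" for i in range(2)]

def reconstruct(drive_id: str) -> str:
    """Hypothetical reconstruction API: turn a drive log into a 3D scene."""
    return f"scene({drive_id})"

def generate_variations(scene: str, n: int) -> list[str]:
    """Hypothetical generative API: produce synthetic variants of a scene."""
    return [f"{scene}/variant_{i}" for i in range(n)]

# The tool registry is what actually gets exposed to the agent.
TOOLS: dict[str, Callable] = {
    "search_drives": search_drives,
    "reconstruct": reconstruct,
    "generate_variations": generate_variations,
}

def run_task(task: str) -> list[str]:
    """Stand-in for the agent's plan: search for relevant drives,
    reconstruct each one, then generate synthetic variants of each scene."""
    drives = TOOLS["search_drives"](task)
    scenes = [TOOLS["reconstruct"](d) for d in drives]
    return [v for s in scenes for v in TOOLS["generate_variations"](s, n=3)]
```

The point of the sketch is the shape, not the contents: if each stage of the pipeline is reachable through a clean API, the agent only needs the registry and the task.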
It's really exciting to be able to see the agent basically structure its thoughts and be able to start pulling together these different components and putting them into more and more complete workflows. We really expect in the future that the way that we design both our tools and our libraries is going to be really fundamental to the way that agents want to consume those things. And if they're designed well, the agents will have a really nice set of tools where they can go and accomplish whatever task it is that you want them to do.
Michael Rubloff: You mentioned having faster time to simulation. This again just continues to drop the barrier to being able to really effectively and accurately simulate information. I'm really excited actually to go and try that when I return home. One of the other things that I'm pretty excited to try out is that you also just announced AlpaDreams today, which I know is more on the research side.
Matt Cragun: Yeah, that's great. I can mention a few things, and hopefully you'll have some time to meet with our research team later in the week. AlpaDreams is some of our latest research that we announced and showcased. It's both down on the demo floor, which is nice — you can go and experience it if you want to — and we also have a nice blog. But the sum of it is that there's a lot of opportunity to test things like Alpamayo. One of the challenges that we have in simulation is being able to build the right types of simulation environments. Alpamayo is really designed such that it can logically think through complex scenarios. But the challenge that we have in terms of development is building simulations of really complex scenarios.
In the automotive industry we often use the term "edge case." Edge cases are just those events that are really rare, things that you don't experience very often, but an autonomous vehicle still needs the ability to negotiate through those challenges. So the question is: if you're building a system and you want that system to be able to handle really hard things, then how do you simulate those really hard things if you don't actually observe those in day-to-day life? This is where generative AI tools become really, really powerful. They give us the ability to effectively build whatever it is that we can imagine.
In simulation, we have two types of simulation. There's what we call open-loop simulation. This is basically just where you can start generating pixels and scenarios, and you can even let the policy do some inference on those things. But where it really gets interesting is when you do closed-loop simulation. Closed-loop simulation is where the simulator and the policy both interact with each other. What that means is that as the simulator starts to establish and render a state of the world, the policy then receives that state of the world and makes a decision. Then the decision that the policy makes is reflected in the simulator itself. In other words, just as you interact with the world on a day-to-day basis, the world also responds to your actions. So we want the same thing in a simulator.
The reason that I'm bringing this up is because with the Cosmos models that we have available today, we're not able to do closed-loop simulation. Typically what we do today is we give the model a prompt, and then that model can perform a rollout and can generate frames for nine, ten, or twenty seconds, whatever it is that you ask it to do. But what the models can't do today is respond or change their behavior in response to some other stimulus.
So what we do today with Cosmos in AlpaDreams is we basically adjusted Cosmos such that the next sequence of frames are both dependent on the history — what it's rendered or what it's done so far — plus any new information or state that you want to provide to the model. This is where we get to closed loop. Cosmos rolls out a few frames, and Alpamayo receives those frames and makes a decision, and then it sends its driving action back to the simulator. Then Cosmos, based on what the policy just decided to do, simulates the next set of frames, just a little bit forward in time, and sends those back to the policy. So we go in this closed loop back and forth, and both the policy and the simulation — or the world — are rolled out together.
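The back-and-forth Matt describes is easy to state as a loop. The classes below are toy stand-ins — the real Cosmos and Alpamayo interfaces are not public — but the structure of the hand-off is the point: the world model rolls out a few frames conditioned on its history plus the latest action, the policy decides from those frames, and the decision feeds the next rollout.

```python
from dataclasses import dataclass, field

@dataclass
class ToyWorldModel:
    """Stand-in for Cosmos: advances a scalar 'state' instead of pixels."""
    state: float = 0.0
    history: list = field(default_factory=list)

    def rollout(self, action: float, n_frames: int = 4) -> list:
        # Next frames depend on accumulated history (self.state) + new action.
        frames = [self.state + action * (i + 1) / n_frames for i in range(n_frames)]
        self.state = frames[-1]
        self.history.extend(frames)
        return frames

@dataclass
class ToyPolicy:
    """Stand-in for Alpamayo: reads the latest frames, emits one action."""
    target: float = 10.0

    def act(self, frames: list) -> float:
        return 0.5 if frames[-1] < self.target else -0.5

def closed_loop(world: ToyWorldModel, policy: ToyPolicy, steps: int = 20) -> list:
    frames = world.rollout(action=0.0)   # initial rollout, no decision yet
    for _ in range(steps):
        action = policy.act(frames)      # policy decides from rendered frames
        frames = world.rollout(action)   # world responds to that decision
    return world.history
```

Open-loop simulation would be the same `rollout` called once with a fixed prompt; closing the loop is precisely the extra arrow from `policy.act` back into `world.rollout`.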
Michael Rubloff: Nice. I'm actually very excited to go and see for myself out on the show floor. But yeah, I really wanted to say thank you for taking the time here at GTC to talk with me about all the things Alpamayo and simulation.
Matt Cragun: Yeah, happy to do it. Glad to be here.
Taken together, the announcements sketch a fairly clear direction for NVIDIA's AV stack: a more steerable reasoning model in Alpamayo 1.5, a NuRec SDK that meets developers wherever they already are in their reconstruction pipelines, agent-driven workflows that wire data search, reconstruction, asset harvesting, and Cosmos together through APIs, and an early look — via AlpaDreams — at what closed-loop generative simulation could mean for edge-case coverage.
For the radiance fields community specifically, the throughline is that neural reconstruction is no longer just a capture and render story. It is becoming a foundational layer for synthetic data generation, asset reuse across scenes, and eventually closed-loop world models. Thanks to Matt for taking the time at GTC, and we will be following the NuRec SDK and AlpaDreams releases closely on RadianceFields.com.
