Interview with Luma AI Artist in Residence: Martin Nebelong

Michael Rubloff

Jun 19, 2023

Martin Nebelong

I recently got to interview the current Luma AI Artist in Residence, Martin Nebelong. Please enjoy the interview in its entirety.

Hey Martin! Really appreciate the time! Want to start by giving your background and briefly describing what a Neural Radiance Field (NeRF) is for those who might be unfamiliar?

Hi Michael, thanks for inviting me to take part in this interview. I’m very excited about NeRFs and about their potential to change how we produce and view 3D content.

I don’t have a very deep understanding of how NeRFs work, but in short, NeRF stands for “Neural Radiance Field.” It’s a way to convert a set of images into a continuous volumetric representation that retains the look and feel of the scene, much closer to the source material than what we’re used to from polygon-based scanning.

As part of the conversion from images to a NeRF, a small neural network is trained to reproduce the scene those images depict.
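To make that idea a bit more concrete, here is a minimal, illustrative sketch of the rendering step at the heart of the original NeRF formulation (not Luma’s implementation): a trained network maps each 3D sample point along a camera ray to a density and a color, and the samples are composited into a single pixel. The `field` function below is a hypothetical stand-in for that trained network.

```python
import numpy as np

def render_ray(field, origin, direction, t_near=0.1, t_far=4.0, n_samples=64):
    """Composite one camera ray through a radiance field into an RGB color.

    `field` is a stand-in for a trained NeRF network: it maps sample
    points (n, 3) plus a view direction (3,) to densities (n,) and
    colors (n, 3).
    """
    ts = np.linspace(t_near, t_far, n_samples)        # sample depths along the ray
    delta = (t_far - t_near) / n_samples              # spacing between samples
    points = origin + ts[:, None] * direction         # (n_samples, 3) sample positions
    density, color = field(points, direction)
    alpha = 1.0 - np.exp(-density * delta)            # opacity contributed per sample
    trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1]))  # light surviving to each sample
    weights = alpha * trans
    return (weights[:, None] * color).sum(axis=0)     # blended pixel color
```

Training then amounts to rendering rays like this for every capture photo and adjusting the network’s weights until the rendered colors match the photographs; the scene itself lives entirely in those weights.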

A LOT of work is being put into NeRF research, and we’re seeing potential uses of NeRF tech in everything from Google Maps to 3D production in games and film, to generative 3D applications that will eventually allow us to generate or manipulate NeRFs.

How did you initially become interested in NeRF and what drew you to it as an artistic medium?

I got interested in NeRFs when Alex Evans and some of his colleagues at NVIDIA published a paper called “Instant Neural Graphics Primitives with a Multiresolution Hash Encoding”. I was very impressed with the quality of the scans and the faithful representation of the scanned scenes in their NeRF viewer. One of their test scenes was a stuffed fox. The soft, furry surface of the fox would have been impossible to capture convincingly with traditional photogrammetry, but with NeRFs the model stood out as indistinguishable from the original. Incredible!

I was already a big fan of Alex Evans’ work due to my involvement with Dreams by Media Molecule. Dreams is a game engine/creation tool that allows anyone with a PlayStation to create 3D models, paintings, music, full games, and animation directly on their console. Alex Evans was one of the key figures behind the engine that powers Dreams. And as opposed to other game engines, Dreams’ engine isn’t based on polygons, but on SDFs instead.

SDF stands for “Signed Distance Field,” and like NeRFs it’s a volumetric representation of 3D objects. As an artist who’s been using polygon modeling tools since the mid-’90s, I never really got to a point where I enjoyed pulling vertices around with a mouse. With Dreams, I could sculpt with two hands using Move controllers; I could push and pull shapes and build without worrying about topology, UVs, and all of the stuff that is so embedded in most workflows today, even though it doesn’t have much to do with the raw feeling of sculpting and painting that I think all tools should try to achieve.
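For readers unfamiliar with the term: a signed distance field is simply a function that, for any point in space, returns the distance to the nearest surface, negative inside an object and positive outside. Below is a minimal, illustrative sketch (nothing to do with Dreams’ actual engine) of a sphere SDF and the “sphere tracing” loop commonly used to render SDFs.

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance from point p to a sphere's surface:
    negative inside, zero on the surface, positive outside."""
    return math.dist(p, center) - radius

def sphere_trace(origin, direction, sdf, max_steps=128, eps=1e-4, t_max=100.0):
    """March along a ray (direction assumed to be a unit vector);
    the SDF value at each point is always a safe step size."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sdf(p)
        if d < eps:
            return t          # hit: distance to the surface along the ray
        t += d                # step forward by the distance to the nearest surface
        if t > t_max:
            break
    return None               # missed everything
```

Because the geometry is just a function rather than a mesh, shapes can be blended, smoothed, and carved freely, with no topology or UVs to manage, which is what enables the kind of direct, two-handed sculpting described above.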

You’re one of the first Luma AI Artists in Residence. How did it come about and what excites you about the opportunity?

In Dreams, there’s no way to export anything but images and video, so I’ve tried various photogrammetry solutions in the past with varied success. When I saw the work Alex Evans had done with NeRFs, I knew the tech might be the answer to my prayers. Surfaces in Dreams can be very “fluffy” and painterly, which is close to impossible to get out with more traditional photogrammetry. But as it turns out, NeRFs do a great job with that. I tried some of the more manual NeRF solutions, but found them a bit too technical for the work I needed to do with them.

I then heard of the work Luma Labs is doing to make the tech more accessible, and I started doing tests with their software. I’m quite active on social media, especially Twitter, and got good reactions there from my followers when I posted about my Luma experiments. Luma themselves also started retweeting and liking my work, and I got in contact with a few members of their team on various occasions. During our chats, the possibility of an Artist in Residence position came up. Needless to say, I was very excited about being part of the NeRF revolution with Luma!

How has your role as a Luma AI Artist in Residence influenced your work? How do you think it might influence the art world at large?

I’m only in my second month of the residency now, and it’s only a few days of work each month, so I’ve had limited experience with it so far. I’m constantly thinking of new ways to experiment with art and tech, and Luma and NeRFs are a big part of those thoughts. I’m also working directly with Runway, who make generative video tools, and I want to do experiments with my NeRFs there as well.

https://twitter.com/MartinNebelong/status/1670410972358496261

In terms of changing the world, I think NeRFs will change how we view 3D content in the future. We’ll get used to much more realistic representations of the real world in solutions like Google Maps, in our image galleries, on websites and eventually maybe in fully volumetric video and games.

Can you walk us through your process for creating a NeRF? How do you conceptualize the idea and then transform it into a piece of art?

Most of my NeRF work so far has been technical experiments leading up to actually using NeRFs in my work. With the release of Luma’s Unreal Engine 5 (UE5) plugin, NeRFs have gone from being an interesting concept to a very potent production tool. I have contacts who work in virtual production, and I see big potential here for the UE5 NeRF integration. Imagine scanning sets and then recreating them on an LED volume stage, with additional layers added in UE5.

In general, my work is very story- and mood-based, and I usually consider a story or a mood before I start working on a piece. I try to put myself in the shoes of a character who would inhabit the world I’m creating and take it from there. Or I imagine a certain mood and try to recreate it.

What are some of the most challenging aspects of working with NeRF? How do you overcome these challenges?

The first challenge is to get a good scan out of Dreams; it’s not as straightforward as scanning from real life. The first step is to turn off all screen effects: bloom, lens flares, vignette, motion and camera blur, and so on.

The next challenge is to make sure there are foreground, middle-ground, and background elements in the scene, as the NeRF training isn’t as effective without good depth (parallax). Once that is handled, I make sure to turn off animation on things like grass, tree crowns, and clouds.

The next step is to upload a video scan of the scene to Lumalabs.ai and wait for the NeRF to train.

Once the NeRF is ready, I decide whether I can use a render from Luma’s website, or whether I need extra customization in UE5 using the new Luma UE5 plugin. Renders through the website are pretty straightforward to make. In UE5, there are a few things to note. One is that a lit NeRF doesn’t currently cast shadows. It does receive and react to light, though, which in itself is a big benefit over the regular NeRF render. Also worth noting is that the plugin currently doesn’t work with ray tracing, so for reflections you’ll need to use screen-space reflections. Other than that, you can use the full power of UE5 and get things like depth of field (DoF) on your camera and volumetric effects like fog, clouds, fire, and particles.

https://twitter.com/MartinNebelong/status/1668566529716236288

Are there any NeRF artists or creators you’re a big fan of?

The first thing that springs to mind here is Corridor Crew. I love their exploration of using NeRFs in their film workflows. Virtual film production is one of the areas where NeRFs can be used today in their current form, and Corridor demonstrates this beautifully.

You utilize Dreams for many of your creations. How do you see different platforms and methods coalescing in the future to create art? Could you share your thoughts on the relationship between NeRF and other forms of digital art?

Yes, Dreams has been my tool of choice for some years now. It’s by far the fastest way to build 3D scenes without relying on premade assets like Megascans, 3D assets bought from online libraries, and so on. I think there’s huge potential in the combination of NeRFs and generative AI, and I’ve seen some very interesting examples of combining tools like Kaiber AI and Runway’s tools (both AI video tools) with NeRFs.

How do you think the 3D photorealism of NeRFs adds to the viewer's experience of your art?

In the case of my Dreams art, NeRFs give me a way to get the artwork out of Dreams in a version that is as close to my original work as possible. The intricate nature of surfaces in Dreams, of paint strokes and “flecks” in 3D space, makes it very difficult to get a faithful capture with other scanning workflows.

Are there any specific themes or messages you aim to communicate through your NeRF art?

Outside of a big love for sci-fi and fantasy themed art, one of the key messages I want to convey is that 3D production doesn’t have to be technical and complex like most 3D workflows today. Creating in 3D can be playful, creative, and fun. In the future, tech like NeRF will help us tell bolder, more ambitious stories in 3D. Over the last few years I’ve seen tools in VR, and Dreams itself, break the spell of 3D tools that feel like they’re aimed at engineers rather than artists. With everything that’s happening in generative AI, it’s not far-fetched to imagine a future where NeRFs are no longer just scenes frozen in their captured form. Holodeck, here we come, right?

What advice would you give to someone who wants to start creating their own NeRF art?

I would recommend that they check out Luma’s tools, of course! Given the work I do for Luma, they’re welcome to take that advice with a grain of salt, but the tools Luma offers really do make it very easy to get into creating NeRFs. And with the recent launch of the UE5 integration, Luma has given professional content creators an even bigger incentive to use NeRFs in their workflows.

What’s something NeRFs can’t do yet that you’re looking forward to?

One very obvious disadvantage of using NeRFs at the moment is that they’re frozen representations of a constantly shifting world, a world that is brimming with life and movement. Another limitation is that there’s currently no way to change the form of a NeRF once it’s been captured. Being able to re-light them in UE5 is a big step forward, but obviously just one step of a potentially very long and exciting journey.

Going off that, how do you see NeRFs evolving as an art form in the future?

I don’t know if NeRFs themselves will be an art form. I think NeRFs will merge with our creative workflows, and eventually they’ll be used in contexts as diverse as any other creative medium.

You recently showed a NeRF adaptation of John William Waterhouse's The Lady of Shalott. How do you think NeRF can breathe new life into art and allow people to re-experience a moment?

I did, yes! That’s one of my personal favorites in my collection of Dreams works, and Luma translated it beautifully into a NeRF. There’s no way I could have gotten the painterly nature of that scene out of Dreams if it weren’t for Luma’s tool. The scene itself was built from scratch in Dreams, using two Move controllers. I think in the future we might be able to build 3D representations of 2D images through generative 3D tools. Remember the scene in Blade Runner where Deckard moves around inside a photo? I don’t think that’s far-fetched anymore.

The NeRF is stunning! Can you share any upcoming NeRF projects that you’re particularly excited about, and where can we find them?

I’ve got a few NeRF projects lined up with Luma that I can’t wait to share with you and the world. I can’t say too much about them at the moment, but they’ll combine real-world captures, Dreams captures, UE5 work, and maybe even generative AI like Runway’s Gen-1 and Gen-2 to create small stories. Stay tuned! And thank you for maintaining this great resource for NeRF-related news.
