NVIDIA GTC 2023 NeRF Schedule List

Michael Rubloff

Mar 10, 2023


With NVIDIA's GTC fast approaching, it's important to plan out which sessions you're interested in and can attend. However, with so many presentations, it's easy to get lost. So here's a list of all the currently scheduled NeRF presentations, along with the speaker, a link to join, and a summary of each talk.

It will be possible to join all the sessions below virtually, but make sure that you register using the links. It's actually pretty easy to make an NVIDIA account, and you can sign in with Google, Facebook, or Discord if you don't want to create one. There are no NeRF sessions on Monday the 20th; they begin on Tuesday the 21st. Please note that NVIDIA appears to use Microsoft Teams as its streaming platform, and that all times listed are Pacific.

All of the sessions sound fascinating, but I am most excited for NeRFs-as-a-Service (NeRFaaS?).


Tuesday, March 21st

The Indefinable Moods of Artificial Intelligence [S51838]

Kathy Smith, Artist and Professor, School of Cinematic Arts, University of Southern California

Time: 11:00 AM - 11:50 AM PDT

Learn about the direct correlation of human dreams to the evolution of drawing, painting, and animation, and how AI reflects the collective unconscious, driving new forms of art creation, structure, and narrative forms. We'll look at how AI is reflecting the dream world — our hopes, fears, desires, biases, and human frailty. How can we better use AI for good? How can it enhance all humans to return to creative practice and not separate art from everyday life? NVIDIA RTX GPUs are widely used at the University of Southern California’s School of Cinematic Arts and the new Expanded Animation - XA Program focuses on animation for AI, virtual characters, and robotics. Through USC’s XA Program we’ll showcase how NVIDIA Omniverse, NVIDIA Canvas, GauGAN2, and NeRFs are being used to teach, experiment, and express dream imagery, inspiring new forms of image making and creative process.

3D by AI: Using Generative AI and NeRFs for Building Virtual Worlds [S52163]

Gavriel State, Senior Director, Simulation and AI, NVIDIA

Time: 12:00 PM - 12:50 PM PDT

Building virtual worlds such as digital twins of factories, warehouses, or city streets is an extremely involved and technical process for today’s technical artists and can take months to complete. NeRFs (Neural Radiance Fields) can accelerate this process by scanning the real world into our virtual worlds, and we can further speed things up by employing Generative AI techniques to create objects and materials from scratch. Join this session to see some of NVIDIA’s latest work in Generative AI and NeRFs for building virtual worlds in NVIDIA Omniverse.

On-Demand Neural Radiance Fields: View Synthesis and 3D Reconstruction for 3D Online Shopping Experiences [S51547]

Kyungwon Yun, CTO, RECON Labs, Inc.

Time: 7:00 PM - 7:25 PM PDT

Learn how the recent explosion of neural radiance fields (NeRF) will change the future of online shopping experiences. Powered by Instant Neural Graphics Primitives and Kaolin-Wisp, we present our recent journey on a NeRF-derived software-as-a-service application for 3D online shopping. We'll discuss the engineering challenges of making neural radiance fields into a real-world product. Some key details of NeRFs-as-a-Service will be provided, including (1) architecting on-demand NeRFs in the cloud, (2) challenges of view synthesis and 3D-reconstruction as a product, and (3) lessons learned from client/end-user reviews on 3D online shopping.

Wednesday, March 22nd

Building City-Scale Neural Radiance Fields for Autonomous Driving [S51770]

Piotr Sokolski, Staff ML Software Engineer, Wayve

Time: 9:00 AM - 9:25 AM PDT

We'll share our experience with building a pipeline for constructing neural radiance fields (NeRFs) at a city scale. Recent advancements in neural rendering techniques, such as NeRFs, enable the creation of data-driven simulations for robot perception and control. We use NeRFs to build interactive environments to test and train autonomous agents that control vehicles deployed on real roads. Join us to hear more about the challenges involved in scaling this technique to be able to create city-scale reconstructions: (1) splitting the problem into parallelizable sub-tasks, (2) automating quality control, and (3) overcoming the shortcomings of using NeRFs to simulate complicated driving scenarios.

3D Synthetic Data: Simplifying and Accelerating the Training of Vision AI Models for Industrial Workflows [S51663]

Bhumin Pathak, Senior Product Manager, Omniverse Replicator, NVIDIA

Time: 12:00 PM - 12:50 PM PDT

Today, companies are deploying vision AI-based applications to power industrial workflows such as detecting defects, improving worker safety, training autonomous robots, and much more. Training these AI models requires copious amounts of data that are challenging to collect and annotate manually, impacting the ability of the AI model to accurately capture the multitude of scenarios in a production environment. 3D synthetic data generated by computer algorithms can help overcome the data gap and speed up model training. We’ll show an end-to-end training to deployment example, starting with Omniverse Replicator, from generating 3D synthetic data to training the AI model with NVIDIA TAO and deploying it into production using NVIDIA DeepStream. We’ll announce the latest updates to Omniverse Replicator, including NeRF-based geometry creation and a whole host of other features that simplify data generation, accelerating AI model development.

Nerfstudio: A Modular Framework for Neural Radiance Field Development [S51842]

Angjoo Kanazawa, Assistant Professor, University of California at Berkeley

Time: 2:00 PM - 2:25 PM PDT

Neural radiance fields (NeRFs) are rapidly gaining popularity for their ability to create photorealistic 3D reconstructions in real-world settings, with recent advances driving interest from a wide variety of disciplines in academia and industry. However, due to the flux of papers, consolidating code has been a challenge, and few tools exist to easily run NeRFs on user-collected data. I'll introduce nerfstudio, an open-source Python framework we recently released to address these issues by consolidating NeRF research innovations and making NeRFs easier to use in real-world applications. I'll discuss its development and recent updates such as visual effects integrations and more.
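If you're attending several of these talks, it helps to know the one piece of math every NeRF system shares: rendering a pixel by alpha-compositing color samples along a camera ray. The sketch below is a generic, illustrative NumPy version of that compositing step (the function name and inputs are mine, not from any specific framework such as nerfstudio):

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one ray, as in the NeRF rendering equation.

    densities: (N,) non-negative volume densities (sigma) at each sample
    colors:    (N, 3) RGB color predicted at each sample
    deltas:    (N,) distances between adjacent samples
    Returns the rendered RGB value for the ray.
    """
    alphas = 1.0 - np.exp(-densities * deltas)   # opacity of each ray segment
    trans = np.cumprod(1.0 - alphas + 1e-10)     # accumulated transmittance
    trans = np.concatenate([[1.0], trans[:-1]])  # light reaching sample i passes i-1 segments
    weights = alphas * trans                     # each sample's contribution to the pixel
    return (weights[:, None] * colors).sum(axis=0)
```

A single very dense (opaque) sample dominates the ray and returns its own color, while an all-empty ray composites to black; training a NeRF amounts to fitting the network that predicts `densities` and `colors` so these rendered pixels match the captured photos.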

3D by AI: Using Generative AI and NeRFs for Building Virtual Worlds [S52163a]

Gavriel State, Senior Director, Simulation and AI, NVIDIA

Time: 11:00 PM - 11:50 PM PDT

Building virtual worlds such as digital twins of factories, warehouses, or city streets is an extremely involved and technical process for today’s technical artists and can take months to complete. NeRFs (Neural Radiance Fields) can accelerate this process by scanning the real world into our virtual worlds, and we can further speed things up by employing Generative AI techniques to create objects and materials from scratch. Join this session to see some of NVIDIA’s latest work in Generative AI and NeRFs for building virtual worlds in NVIDIA Omniverse.

Thursday, March 23rd

3D Synthetic Data: Simplifying and Accelerating the Training of Vision AI Models for Industrial Workflows, with Q&A from EMEA Region [S51663a]

Bhumin Pathak, Senior Product Manager, Omniverse Replicator, NVIDIA

Time: 2:00 AM - 2:50 AM PDT

Today, companies are deploying vision AI-based applications to power industrial workflows such as detecting defects, improving worker safety, training autonomous robots, and much more. Training these AI models requires copious amounts of data that are challenging to collect and annotate manually, impacting the ability of the AI model to accurately capture the multitude of scenarios in a production environment. 3D synthetic data generated by computer algorithms can help overcome the data gap and speed up model training. We’ll show an end-to-end training to deployment example, starting with Omniverse Replicator, from generating 3D synthetic data to training the AI model with NVIDIA TAO and deploying it into production using NVIDIA DeepStream. We’ll announce the latest updates to Omniverse Replicator, including NeRF-based geometry creation and a whole host of other features that simplify data generation, accelerating AI model development.

Watch Party: 3D Synthetic Data: Simplifying and Accelerating the Training of Vision AI Models for Industrial Workflows [WP51663]

Bhumin Pathak, Senior Product Manager, Omniverse Replicator, NVIDIA

Time: 5:30 AM - 7:00 AM PDT

A GTC Session Watch Party is a replay of an original GTC talk hosted by our NVIDIA Team. This is an interactive session and we encourage you to join the discussion with any comments or questions. Please note that the original speakers of the talk listed below may not be in attendance.

Hosted by:

  • Andrea Pilzer, Solutions Architect, NVIDIA

  • Gianni Rosa Gallina, R&D Technical Lead, Deltatre

  • Clemente Giorio, R&D Senior Software Engineer, Deltatre

Today, companies are deploying vision AI-based applications to power industrial workflows such as detecting defects, improving worker safety, training autonomous robots, and much more. Training these AI models requires copious amounts of data that are challenging to collect and annotate manually, impacting the ability of the AI model to accurately capture the multitude of scenarios in a production environment. 3D synthetic data generated by computer algorithms can help overcome the data gap and speed up model training. We’ll show an end-to-end training to deployment example, starting with Omniverse Replicator, from generating 3D synthetic data to training the AI model with NVIDIA TAO and deploying it into production using NVIDIA DeepStream. We’ll announce the latest updates to Omniverse Replicator, including NeRF-based geometry creation and a whole host of other features that simplify data generation, accelerating AI model development.

A “Join Watch Party Now” link will appear below, 15 minutes before the session start time. Click on the link to launch Microsoft Teams on your computer or from your web browser.

Watch Party: 3D by AI: Using Generative AI and NeRFs for Building Virtual Worlds [WP52163]

Gavriel State, Senior Director, Simulation and AI, NVIDIA

Time: 7:00 AM - 9:00 AM PDT

A GTC Session Watch Party is a replay of an original GTC talk hosted by our NVIDIA Team. This is an interactive session and we encourage you to join the discussion with any comments or questions. Please note that the original speakers of the talk listed below may not be in attendance.

Hosted by:

  • Branislav Kisacanin, Sr. Architect, Computer Vision, NVIDIA

As the next generation of artist tools and apprentices, AI will enable us to build 3D virtual worlds bigger, faster, and easier than ever before. Join this session to see NVIDIA’s latest work in generative AI models for creating 3D content and scenes, and see how these tools and research can help 3D artists in their workflows.

A “Join Watch Party Now” link will appear below, 15 minutes before the session start time. Click on the link to launch Microsoft Teams on your computer or from your web browser.

Friday, March 24th

Watch Party: 3D Synthetic Data: Simplifying and Accelerating the Training of Vision AI Models for Industrial Workflows (in Mandarin) [WP51663a]

Bhumin Pathak, Senior Product Manager, Omniverse Replicator, NVIDIA

Time: 1:00 AM - 2:30 AM PDT

A GTC Watch Party is a session hosted by local NVIDIA experts and presented in Mandarin, in which participants watch a selected talk together with live commentary and real-time Q&A. You're welcome to join and interact! (Please note: due to time-zone differences, the original speakers may not attend the watch party.)

Hosted by:

  • Ken He, Developer Community Manager, NVIDIA

Today, companies are deploying vision AI-based applications to power industrial workflows such as detecting defects, improving worker safety, training autonomous robots, and much more. Training these AI models requires copious amounts of data that are challenging to collect and annotate manually, impacting the ability of the AI model to accurately capture the multitude of scenarios in a production environment.

3D synthetic data generated by computer algorithms can help overcome the data gap and speed up model training. We’ll show an end-to-end training to deployment example, starting with Omniverse Replicator, from generating 3D synthetic data to training the AI model with NVIDIA TAO and deploying it into production using NVIDIA DeepStream. We’ll announce the latest updates to Omniverse Replicator, including NeRF-based geometry creation and a whole host of other features that simplify data generation, accelerating AI model development.

A “Join Watch Party Now” link will appear below, 15 minutes before the session start time. Click on the link to launch Microsoft Teams on your computer or from your web browser.
