
Self-Supervised Multimodal NeRF for Autonomous Driving

Source: arXiv

Gaurav Sharma, Ravi Kothari, Josef Schmid

cs.CV | Jun 24, 2025

One-line Summary

The paper presents a self-supervised NeRF framework for autonomous driving that efficiently learns from LiDAR and camera data without 3D labels, achieving superior performance on the KITTI-360 dataset.

Plain-language Overview

This research introduces a framework for building 3D scene representations from both LiDAR and camera data, a key capability for self-driving cars. The framework, called Novel View Synthesis Framework (NVSF), trains without any labeled 3D data, which makes it far more practical for real-world deployment. It learns to concentrate on the most informative parts of the camera images while preserving fine-grained detail from the LiDAR point clouds. In experiments on KITTI-360, a standard benchmark for evaluating autonomous driving systems, it outperforms existing methods.
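To make the self-supervised idea concrete, here is a minimal PyTorch sketch, not the authors' NVSF code: a NeRF-style field supervised only by raw camera pixels (a photometric loss) and raw LiDAR ranges (a depth loss), so no 3D labels are needed. The network size, ray-sampling scheme, loss weight, and placeholder data are all illustrative assumptions.

```python
# Minimal sketch of self-supervised multimodal NeRF training (illustrative,
# not the paper's NVSF implementation): the only supervision is raw camera
# pixel colors and raw LiDAR range measurements -- no 3D labels.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Maps a 3D point to (density, RGB); stands in for the full multimodal field."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),              # 1 density channel + 3 color channels
        )

    def forward(self, x):
        out = self.mlp(x)
        sigma = torch.relu(out[..., 0])        # non-negative density
        rgb = torch.sigmoid(out[..., 1:])      # colors in [0, 1]
        return sigma, rgb

def render_rays(field, origins, dirs, n_samples=64, near=0.5, far=50.0):
    """Standard volume rendering: composite color and expected depth along rays."""
    t = torch.linspace(near, far, n_samples, device=origins.device)       # (S,)
    pts = origins[:, None, :] + dirs[:, None, :] * t[None, :, None]       # (R, S, 3)
    sigma, rgb = field(pts)                                               # (R, S), (R, S, 3)
    delta = t[1:] - t[:-1]
    delta = torch.cat([delta, delta[-1:]])                                # (S,)
    alpha = 1.0 - torch.exp(-sigma * delta)                               # (R, S)
    trans = torch.cumprod(torch.cat(
        [torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
    weights = alpha * trans                                               # (R, S)
    color = (weights[..., None] * rgb).sum(dim=1)                         # (R, 3)
    depth = (weights * t).sum(dim=1)                                      # (R,) expected depth
    return color, depth

# One self-supervised training step on placeholder sensor data.
field = TinyNeRF()
opt = torch.optim.Adam(field.parameters(), lr=5e-4)

cam_origins = torch.zeros(1024, 3)
cam_dirs = nn.functional.normalize(torch.randn(1024, 3), dim=-1)
pixel_rgb = torch.rand(1024, 3)               # observed pixel colors (placeholder)
lidar_origins = torch.zeros(256, 3)
lidar_dirs = nn.functional.normalize(torch.randn(256, 3), dim=-1)
lidar_range = torch.rand(256) * 49.5 + 0.5    # measured LiDAR ranges (placeholder)

pred_rgb, _ = render_rays(field, cam_origins, cam_dirs)
_, pred_depth = render_rays(field, lidar_origins, lidar_dirs)
# Photometric loss (camera) + depth loss (LiDAR); the 0.1 weight is an assumption.
loss = ((pred_rgb - pixel_rgb) ** 2).mean() + 0.1 * ((pred_depth - lidar_range) ** 2).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

The point of the sketch is that both loss terms come straight from the sensors: rendered colors are compared against camera pixels and rendered expected depth against LiDAR ranges, so the field is trained end to end without any human-annotated 3D geometry.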

Technical Details