Style-NeRF2NeRF: 3D Style Transfer From Style-Aligned Multi-View Images, by Haruo Fujiwara and 2 other authors
Abstract: We propose a simple yet effective pipeline for stylizing a 3D scene, harnessing the power of 2D image diffusion models. Given a NeRF model reconstructed from a set of multi-view images, we perform 3D style transfer by refining the source NeRF model using stylized images generated by a style-aligned image-to-image diffusion model. Given a target style prompt, we first generate perceptually similar multi-view images by leveraging a depth-conditioned diffusion model with an attention-sharing mechanism. Based on these stylized multi-view images, we then guide the style transfer process with a sliced Wasserstein loss computed on feature maps extracted from a pre-trained CNN model. Our pipeline consists of decoupled steps, allowing users to test various prompt ideas and preview the stylized 3D result before proceeding to the NeRF fine-tuning stage. We demonstrate that our method can transfer diverse artistic styles to real-world 3D scenes with competitive quality. Result videos are also available on our project page: this https URL
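To make the loss concrete, below is a minimal PyTorch sketch of a sliced Wasserstein loss over CNN feature vectors. It assumes feature maps (e.g., from a pre-trained VGG) have already been flattened into matrices of per-pixel feature vectors with equal sample counts; the function name, signature, and details are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def sliced_wasserstein_loss(feat_src, feat_style, num_projections=64):
    """Approximate sliced Wasserstein distance between two feature sets.

    feat_src, feat_style: tensors of shape (N, C), each row a C-dimensional
    feature vector sampled from a CNN feature map. Equal N is assumed here
    for simplicity (a hypothetical setup, not necessarily the paper's).
    """
    c = feat_src.shape[1]

    # Draw random unit directions in feature space.
    directions = torch.randn(num_projections, c, device=feat_src.device)
    directions = directions / directions.norm(dim=1, keepdim=True)

    # Project both feature sets onto each direction: (num_projections, N).
    proj_src = directions @ feat_src.t()
    proj_style = directions @ feat_style.t()

    # Sorting the 1D projections realizes the optimal one-dimensional
    # transport plan, so the distance reduces to a gap between sorted values.
    proj_src, _ = torch.sort(proj_src, dim=1)
    proj_style, _ = torch.sort(proj_style, dim=1)

    return ((proj_src - proj_style) ** 2).mean()
```

In a fine-tuning loop of this kind, the loss would be averaged over feature maps from several CNN layers and backpropagated into the NeRF parameters, with the stylized multi-view images held fixed as targets.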
Submission history
From: Haruo Fujiwara
[v1] Wed, 19 Jun 2024 09:36:18 UTC (17,460 KB)
[v2] Mon, 24 Jun 2024 06:04:23 UTC (17,460 KB)
[v3] Wed, 4 Sep 2024 06:32:00 UTC (22,603 KB)