2nd Workshop for Learning 3D with Multi-View Supervision (3DMV)

at CVPR 2024

Call for Papers Opens:   February 10th

Submission Deadline:   March 15th (11:59 pm Anywhere on Earth)

Workshop Day:   June 17th, 2024

Location:   Seattle Convention Center, WA (room: TBD)

Second Workshop for Learning 3D with Multi-View Supervision @ CVPR 2024

Following the success of the first Workshop for Learning 3D with Multi-View Supervision, held at CVPR 2023, we are excited to present the second iteration of this workshop at CVPR 2024. With growing interest and rapid advances in the field, this year's workshop promises greater depth, more diverse topics, and broader participation. It will cover topics involving multi-view deep learning for core 3D understanding tasks (recognition, detection, segmentation), as well as methods that use posed or unposed multi-view images for 3D reconstruction and generation. New topics of interest have been added this year, such as dynamic multi-view datasets and generative 4D models that leverage multi-view representations. The topics covered in the workshop include the following:

  • Multi-View for 3D Understanding
  • Deep Multi-View Stereo
  • Multi-View for 3D Generation and Novel View Synthesis
  • Dynamic Multi-View Datasets and 4D Generative Models

Submission Timeline

  • Paper submission opens: February 10th
  • Paper submission deadline: March 15th (11:59 pm Anywhere on Earth)
  • Review period: March 15th - April 1st
  • Decision to authors: April 1st
  • Camera-ready papers: April 7th
Call for Papers

    We are soliciting papers that use multi-view deep learning to address problems in 3D understanding and 3D generation, including but not limited to the following topics:

  • Bird's-eye view for 3D object detection
  • Multi-view fusion for 3D object detection
  • Indoor/outdoor scene segmentation
  • Diffusion models for 3D generation
  • Diffusion models for 4D generation
  • 4D understanding
  • Object part segmentation
  • Language + 3D
  • Medical 3D segmentation and analysis
  • 3D shape generation
  • Deep multi-view stereo
  • Inverse graphics from multi-view images
  • Indoor/outdoor scene generation and reconstruction
  • Volumetric multi-view representations for 3D generation and novel view synthesis
  • NeRFs and Gaussian Splatting
  • 3D shape classification
  • 3D shape retrieval
Paper Submission Guidelines

  • We accept both archival and non-archival paper submissions.
  • Archival submissions should be at most 8 pages (excluding references) on the aforementioned and related topics.
  • Non-archival submissions can be works previously published in major venues (within the last two years or at CVPR 2024) or new works (also at most 8 pages).
  • Accepted archival papers will be included in the CVPR 2024 proceedings; accepted non-archival papers will not.
  • Submitted manuscripts should follow the CVPR 2024 paper template (if they have not been published previously).
  • All submissions (except previously published works) will be peer-reviewed under a double-blind policy; authors should not include their names in submissions.
  • PDFs must be submitted online through the link.
  • Authors of accepted papers will be notified to prepare and upload camera-ready posters according to the schedule above.
  • Every accepted paper will have the opportunity for a poster presentation at the workshop.
  • Some accepted papers will be selected for oral presentations at the workshop.
  • A best poster award with a sponsored cash prize will be announced during the workshop.
Schedule (June 17th, 2024)

    Time   Session                                  Speakers
    TBD    Opening Remarks                          Abdullah Hamdi
    TBD    3D Scene Understanding from Images       Matthias Niessner
    TBD    Multi-view Generative Diffusion Models   Ziwei Liu
    TBD    Coffee Break
    TBD    TBD                                      Deva Ramanan
    TBD    Lunch Break
    TBD    Oral Sessions (5 Orals)
    TBD    Poster Session
    TBD    Coffee Break
    TBD    Collecting Real Multi-View Datasets      David Novotny
    TBD    TBD                                      Andrea Tagliasacchi
    TBD    Announcement & Panel Discussion          All speakers and moderators


    Ziwei Liu

    Nanyang Technological University

    David Novotny

    Meta AI Research

    Andrea Tagliasacchi

    Google, Simon Fraser University

    Abdullah Hamdi

    University of Oxford

    Chuanxia Zheng

    University of Oxford
    Contact: abdullah.hamdi@kaust.edu.sa
    CVPR 2024 Workshop ©2024