gDist: Efficient Distance Computation between 3D Meshes on GPU

by Peng Fang1, Wei Wang1, Ruofeng Tong1, Hailong Li3, and Min Tang1,2,*

1 - Zhejiang University, China

2 - Zhejiang Sci-Tech University, China

3 - Shenzhen Poisson Software Co., Ltd., China

* - Corresponding Author

Abstract

Computing maximum/minimum distances between 3D meshes is crucial for various applications, e.g., robotics, CAD, and VR/AR. In this work, we introduce a highly parallel algorithm (gDist) optimized for Graphics Processing Units (GPUs) that can compute the distance between two meshes with over 15 million triangles in less than 0.4 milliseconds. Tested on benchmarks with varying characteristics, the algorithm achieves remarkable speedups over prior CPU-based and GPU-based algorithms on a commodity GPU (NVIDIA GeForce RTX 4090). Notably, it maintains high-speed performance even in challenging scenarios that pose difficulties for prior algorithms.
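For readers unfamiliar with the problem setting, the sketch below is a deliberately naive, hypothetical CUDA baseline, not gDist itself (whose source code is not yet released). Each thread takes one vertex of the first mesh and exhaustively scans all vertices of the second mesh, yielding the minimum vertex-to-vertex distance, which only upper-bounds the true surface-to-surface minimum (the exact minimum may be realized between edge or face interior points). All names such as minVertexDist and Vec3 are illustrative assumptions.

    // Illustrative, hypothetical CUDA sketch (NOT the gDist algorithm): brute-force
    // O(na * nb) minimum vertex-to-vertex distance between two triangle meshes.
    #include <cfloat>
    #include <cuda_runtime.h>

    struct Vec3 { float x, y, z; };

    __device__ inline float sqDist(Vec3 a, Vec3 b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx * dx + dy * dy + dz * dz;
    }

    // For non-negative IEEE-754 floats, the bit pattern preserves ordering, so an
    // atomicMin on the uint reinterpretation of a squared distance is safe.
    __global__ void minVertexDist(const Vec3* va, int na,
                                  const Vec3* vb, int nb,
                                  unsigned int* result /* init to bits of FLT_MAX */) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= na) return;
        float best = FLT_MAX;
        for (int j = 0; j < nb; ++j)          // exhaustive scan of the second mesh
            best = fminf(best, sqDist(va[i], vb[j]));
        atomicMin(result, __float_as_uint(best));
    }

On the host, result would be initialized to 0x7F7FFFFFu (the bit pattern of FLT_MAX); after the kernel, the unsigned int is copied back, its bits reinterpreted as a float, and the square root taken. Such a quadratic scan is far too slow for the mesh sizes reported in the paper; gDist's GPU-parallel algorithm avoids it entirely, which is how it reaches sub-millisecond timings on meshes with over 15 million triangles.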
 

 

Benchmarks: Tested on benchmarks with varying characteristics, our novel GPU-based distance computation algorithm achieves remarkable speedups over prior CPU-based and GPU-based algorithms on a commodity GPU (NVIDIA GeForce RTX 4090).

Contents

Paper  (PDF 3.16 MB)

Video (95.5 MB)

Source Code (Coming soon) 

Peng Fang, Wei Wang, Ruofeng Tong, Hailong Li, and Min Tang. 2024. gDist: Efficient Distance Computation between 3D Meshes on GPU. In SIGGRAPH Asia 2024 Conference Papers (SA Conference Papers '24), December 3–6, 2024, Tokyo, Japan. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3680528.3687619

   @inproceedings{gdist24,
      author = {Fang, Peng and Wang, Wei and Tong, Ruofeng and Li, Hailong and Tang, Min},
      title = {{gDist}: Efficient Distance Computation between 3D Meshes on GPU},
      booktitle = {Proceedings of SIGGRAPH Asia 2024},
      location = {Tokyo, Japan},
      doi = {10.1145/3680528.3687619},
      pages = {1--11},
      month = {December},
      year = {2024},
   }

 

Related Links

CTSN: Predicting Cloth Deformation for Skeleton-based Characters with a Two-stream Skinning Network

D-Cloth: Skinning-based Cloth Dynamic Prediction with a Three-stage Network

N-Cloth: Predicting 3D Cloth Deformation with Mesh-Based Networks

P-Cloth: Interactive Cloth Simulation on Multi-GPU Systems using Dynamic Matrix Assembly and Pipelined Implicit Integrators

I-Cloth: Incremental Collision Handling for GPU-Based Interactive Cloth Simulation

PSCC: Parallel Self-Collision Culling with Spatial Hashing on GPUs

I-Cloth: API for fast and reliable cloth simulation with CUDA

Efficient BVH-based Collision Detection Scheme with Ordering and Restructuring

MCCD: Multi-Core Collision Detection between Deformable Models using Front-Based Decomposition

Interactive Continuous Collision Detection between Deformable Models using Connectivity-Based Culling

TightCCD: Efficient and Robust Continuous Collision Detection using Tight Error Bounds

Fast and Exact Continuous Collision Detection with Bernstein Sign Classification

A GPU-based Streaming Algorithm for High-Resolution Cloth Simulation

Continuous Penalty Forces

UNC dynamic model benchmark repository

Collision-Streams: Fast GPU-based Collision Detection for Deformable Models

Fast Continuous Collision Detection using Deforming Non-Penetration Filters

Fast Collision Detection for Deformable Models using Representative-Triangles

DeformCD: Collision Detection between Deforming Objects

Self-CCD: Continuous Collision Detection for Deforming Objects

Interactive Collision Detection between Deformable Models using Chromatic Decomposition

Fast Proximity Computation Among Deformable Models using Discrete Voronoi Diagrams

CULLIDE: Interactive Collision Detection between Complex Models using Graphics Hardware

RCULLIDE: Fast and Reliable Collision Culling using Graphics Processors

Quick-CULLIDE: Efficient Inter- and Intra-Object Collision Culling using Graphics Hardware

Collision Detection

UNC GAMMA Group

Acknowledgements

This research is supported in part by the Leading Goose R&D Program of Zhejiang under Grant No. 2024C01103.

 


tang_m@zju.edu.cn