Commit fb70e0e

Move from gitee

D-Hank committed Jun 25, 2022
1 parent 83ec227 commit fb70e0e

Showing 20,681 changed files with 75,478 additions and 0 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -0,0 +1 @@
__pycache__/**
99 changes: 99 additions & 0 deletions README.md
@@ -0,0 +1,99 @@
# PRS-Net

## Introduction
A PyTorch implementation of PRS-Net, a research work published in TVCG by Lin Gao et al.

Official link: http://geometrylearning.com/prs-net/

Author: D-Hank

Feel free to contact me at [[email protected]]([email protected]). If you use my code, please cite this repository by its link.

## Dependencies

- [PCL 1.12.1](https://pointclouds.org/): generate point clouds (Poisson disk sampling)
- [cuda_voxelizer](https://github.com/Forceflow/cuda_voxelizer): generate voxels (mesh split)
- [Open3D](http://www.open3d.org/): read pcd files in Python
- [Libigl](https://github.com/libigl/libigl-python-bindings): compute closest points on the mesh (barycentric coordinates)
- [point_cloud_utils](https://github.com/fwilliams/point-cloud-utils): read and write obj files
- [Mayavi](http://docs.enthought.com/mayavi/mayavi/) / matplotlib: visualization

**Note:** the `cuda_voxelizer` library must be adapted for our model. We include the revised version in the `extern/` directory.
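
For orientation, here is a minimal sketch of how these libraries fit together when preparing one model. The file names are placeholders, and this shows typical usage of the listed libraries rather than the exact code in `preprocess.py`:

```python
import igl                        # libigl python bindings
import numpy as np
import open3d as o3d
import point_cloud_utils as pcu

# Mesh (vertices, faces) and a pre-generated point cloud
v, f = pcu.load_mesh_vf("model_normalized.obj")
pcd = o3d.io.read_point_cloud("model_normalized.pcd")

# Closest point on the mesh for every sampled point:
# squared distances, face indices, closest points
queries = np.asarray(pcd.points)
sq_dist, face_idx, closest = igl.point_mesh_squared_distance(queries, v, f)
```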

## Directory

The project directory should be organized like this:

```
├── augment          # dataset after augmentation
├── data             # dataset after preprocessing
├── shapenet         # original ShapeNet data
│   ├── 02691156
│   └── ……
└── prsnet-repr      # main working directory
    ├── checkpoint   # saved models
    └── ……
```

## Running Tips

Change your working directory to `prsnet-repr`. Running the full pipeline takes about 3 days and requires 80 GB of free disk space. You can set the default options in `settings.py`.

To run this project from scratch, first run `python augment.py` to generate the augmented data. This takes about one day on CPU.

Then run `python preprocess.py` to generate voxels, point clouds and closest points. We run 4 processes in parallel, which takes around 2 days with CUDA acceleration; a sketch of such a split is shown below.
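
Purely as an illustration (not the actual structure of `preprocess.py`), a four-way parallel split could look like the following; the worker `preprocess_category` and the directory layout are hypothetical:

```python
import os
from multiprocessing import Pool

def preprocess_category(class_dir: str) -> str:
    # Hypothetical worker: voxelize each model, sample its point cloud
    # and compute the closest points for one category directory.
    ...
    return class_dir

if __name__ == "__main__":
    categories = sorted(os.listdir("augment/train"))
    with Pool(processes = 4) as pool:  # 4 parallel workers, as noted above
        for done in pool.imap_unordered(preprocess_category, categories):
            print("finished", done)
```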

Finally, run `python main.py` to start the main program. Training takes about half an hour.

If you'd like to use the pre-trained model in `checkpoint/`, set `CONTINUE_` in `settings.py` to `True` and run `main.py` directly.
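
For reference, a minimal sketch of the relevant options in `settings.py`; the option names below all appear in this repository, but the values are illustrative assumptions only:

```python
# settings.py (illustrative values only)
ORIGINAL_DATA_DIR = "../shapenet"  # original ShapeNet data
AUG_DIR = "../augment"             # output of augment.py
SPLIT_DIR = "./split"              # per-class train/test split files
NUM_AUG = 4000                     # augmented samples per class
UNPROCESSED = None                 # or a list of class ids left to process
CONTINUE_ = True                   # resume from the model in checkpoint/
```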

## Results

For different categories in the test set, we achieve good results.
Predicted reflective planes (with coordinate axes in the bottom-left corner):

<img src="teaser/a02691156_829108f586f9d0ac7f5c403400264eea_0.gif" alt="a02691156_829108f586f9d0ac7f5c403400264eea_0" width=120px/><img src="teaser/a02691156_17874281e56ff0fbfca1faa43bb6bc17_0.gif" width=120px /><img src="teaser/a02691156_fb06b00775efdc8e21b85e5214b0d6a7_0.gif" width=120px/><img src="teaser/a02747177_8b071aca0c2cc07c81faebbdea6bd9be_0.gif" width=120px/><img src="teaser/a02828884_133d46d90aa8e1742b76566a81e7d67e_0.gif" width=120px/>
<img src="teaser/a02828884_cd052cd64a9f956428baa2ac864e8e40_0.gif" width=120px/><img src="teaser/a02876657_d3ed110edb3b8a4172639f272c5e060d_0.gif" width=120px/><img src="teaser/a02880940_a0ac0c76dbb4b7685430c0f7a585e679_0.gif" width=120px/><img src="teaser/a02958343_4aa7fc4a0c08be8c962283973ea6bbeb_0.gif" width=120px/><img src="teaser/a03046257_5437b68ddffc8f229e5629b793f22d35_0.gif" width=120px/>
<img src="teaser/a03624134_a683ed081504a35e4a9a3a0b87d50a92_0.gif" width=120px/><img src="teaser/a03691459_85bbc49aa67149c531baa3c8ee4148cd_0.gif" width=120px/><img src="teaser/a03691459_403649d8cf6b019d5c01f9a624be205a_0.gif" width=120px/><img src="teaser/a04090263_9397161352dec4498bfbe54b5d01550_0.gif" width=120px/><img src="teaser/a04225987_ac2b6924a60a7a87aa4f69d519551495_0.gif" width=120px/>
<img src="teaser/a04256520_3bde46b6d6fb84976193d9e76bb15876_0.gif" width=120px/><img src="teaser/a04256520_29bfdc7d14677c6b3d6d3c2fb78160fd_0.gif" width=120px/><img src="teaser/a04256520_79745b6df9447d3419abd93be2967664_0.gif" width=120px/><img src="teaser/a04256520_bdd7a0eb66e8884dad04591c8486ec0_0.gif" width=120px/><img src="teaser/a04256520_c983108db7fcfa3619fb4103277a6b93_0.gif" width=120px/>
<img src="teaser/a04379243_290df469e3338a67c3bd24f986301745_0.gif" width=120px/><img src="teaser/a04401088_927b3450c8976f3393078ad6013586e7_0.gif" width=120px/><img src="teaser/a04468005_e5d292b873af06b24c7ef8f59a6ea23a_0.gif" width=120px/><img src="teaser/a04530566_ac5dad64a080899bba2dc6b0ec935a93_0.gif" width=120px/><img src="teaser/a04530566_d271233ccca1e7ee23a3427fc25942e0_0.gif" width=120px/>

Predicted rotation axes for various objects:

<img src="teaser/a02828884_cd052cd64a9f956428baa2ac864e8e40_0_r.gif" width=120px/><img src="teaser/a02880940_a0ac0c76dbb4b7685430c0f7a585e679_0_r.gif" width=120px/><img src="teaser/a02933112_73c2405d760e35adf51f77a6d7299806_0_r.gif" width=120px/><img src="teaser/a03691459_23efeac8bd7132ffb96d0ef27244d1aa_0_r.gif" width=120px/><img src="teaser/a04379243_6af7f1e6035abb9570c2e04669f9304e_0_r.gif" width=120px/>

## Limitations

- Position of the rotation axes

As in [YizhuoChen99](https://github.com/YizhuoChen99/PRS-Net)'s implementation, the model can only predict rotation axes that pass near the origin. Even in the already-normalized ShapeNet dataset, the center of rotation is not always near the origin, so the model sometimes performs poorly on such shapes (its predictions get disturbed).

A possible fix is to introduce a shift vector, or to use a generalized 4×4 transformation matrix in homogeneous coordinates; a sketch follows.
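
As a minimal sketch (not part of this repository), assuming numpy and a 3×3 rotation matrix `R`: a rotation about an axis that passes through a point `c` rather than the origin is the 4×4 homogeneous product T(c) R T(-c):

```python
import numpy as np

def rotation_about_point(R: np.ndarray, c: np.ndarray) -> np.ndarray:
    # 4x4 homogeneous matrix: translate c to the origin, rotate, translate back
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = c - R @ c  # translation part of T(c) @ R_4x4 @ T(-c)
    return M
```

A point `p` is then transformed as `(M @ np.append(p, 1.0))[:3]`, which equals `R @ (p - c) + c`.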

- Problems with axis-angle representation

Basically, the network can learn tricks for a better score: it can pick three arbitrary orthogonal axes and set the rotation angle to 0 or 2π. Such rotations are identities, so both the symmetry-distance loss and the regularization loss stay low regardless of the shape. As a result, the training rotation loss sometimes looks like:

<img src="teaser/rotloss.jpg" width=400px />

After training for a long time, the network tends to get lazy about rotations, and there seems to be no good way to prevent this; the sketch below illustrates the degenerate case.
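
A minimal sketch, assuming the standard unit-quaternion convention q = (cos(θ/2), sin(θ/2) n) for a rotation by angle θ about a unit axis n: at θ = 0 or θ = 2π the quaternion collapses to ±(1, 0, 0, 0), which maps every point to itself, so the symmetry-distance term vanishes for any choice of axis:

```python
import numpy as np

def quat_rotate(q: np.ndarray, p: np.ndarray) -> np.ndarray:
    # Rotate point p by unit quaternion q = (w, x, y, z)
    w, v = q[0], q[1:]
    return p + 2.0 * np.cross(v, np.cross(v, p) + w * p)

p = np.array([0.3, -0.7, 0.2])
axis = np.array([1.0, 0.0, 0.0])  # any axis gives the same result
for theta in (0.0, 2.0 * np.pi):
    q = np.concatenate(([np.cos(theta / 2.0)], np.sin(theta / 2.0) * axis))
    print(theta, np.abs(quat_rotate(q, p) - p).max())  # ~0: identity rotation
```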

## Acknowledgement

Quaternion: a tiny reimplementation of PyTorch3D's quaternion utilities

Pairwise-Cosine: https://github.com/pytorch/pytorch/issues/11202

Reference: He Yue's implementation (https://github.com/hysssb/PRS_net)

Reference: official release (https://github.com/IGLICT/PRS-Net)
107 changes: 107 additions & 0 deletions augment.py
@@ -0,0 +1,107 @@
import os
import random
import numpy as np
import point_cloud_utils as pcu

from typing import Dict, List
from settings import *

# Apply a random rigid rotation to a model's vertices
def random_trans(origin_nodes: np.ndarray):
    # origin_nodes: (N_nodes, 3)
    # Note: rand(3) draws the rotation axis from the positive octant only
    direction = np.random.rand(3)
    x, y, z = direction / (np.linalg.norm(direction) + 1e-12)
theta = 2 * np.pi * np.random.rand()
cos = np.cos(theta)
sin = np.sin(theta)
_1_cos_x = (1 - cos) * x
_1_cos_y = (1 - cos) * y
# Rodrigues formula
matrix = np.array([
[cos + _1_cos_x * x , _1_cos_x * y - sin * z, _1_cos_x * z + sin * y ],
[_1_cos_y * x + sin * z, cos + _1_cos_y * y , _1_cos_y * z - sin * x ],
[_1_cos_x * z - sin * y, _1_cos_y * z + sin * x, cos + (1 - cos) * z * z]
]).astype(np.float32)

    # Rotate row-vector points: (N, 3) @ (3, 3) -> (N, 3).
    # For row vectors this is nodes @ R^T; note that numpy's
    # transpose(0, 1) is a no-op (unlike torch), so use .T.
    trans_nodes = np.matmul(origin_nodes, matrix.T)

return trans_nodes
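
# Illustrative sanity check (not part of the pipeline): a Rodrigues
# rotation matrix is orthogonal, so rotated vertices keep their norms.
#   basis = np.eye(3, dtype = np.float32)
#   assert np.allclose(np.linalg.norm(random_trans(basis), axis = 1), 1.0)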

# Augment a category
def aug_category(class_path: str, _class: str, obj_list: List, count: Dict[str, int], mode: str):
for obj in obj_list:
model_dir = os.path.join(class_path, obj, "models")
model_path = os.path.join(model_dir, "model_normalized.obj")

# Skip bad models
if not os.path.isfile(model_path):
continue

# Read original obj model
old_v, f = pcu.load_mesh_vf(model_path)
        # Generate one transformed copy of the model per sample
for i in range(0, count[obj]):
# new data
new_dir = os.path.join(AUG_DIR, mode, "a" + _class, obj + "_" + str(i))
new_path = os.path.join(new_dir, "model_normalized.obj")

# Skip already processed
if os.path.isfile(new_path):
continue

# Make new dir
if not os.path.isdir(new_dir):
os.makedirs(new_dir)

# Save new
new_v = random_trans(old_v)
pcu.save_mesh_vf(new_path, new_v, f)


for _class in os.listdir(ORIGINAL_DATA_DIR):
class_path = os.path.join(ORIGINAL_DATA_DIR, _class)

# Skip taxonomy
if not os.path.isdir(class_path):
continue

# Skip processed classes
    if (UNPROCESSED is not None) and (_class not in UNPROCESSED):
continue

print("Entering class: ", _class)

# Read split file
train_obj = []
test_obj = []
with open(os.path.join(SPLIT_DIR, _class + "_train.txt"), mode = "r") as train_file:
for obj in train_file:
obj = obj.strip("\n")
train_obj.append(obj)

with open(os.path.join(SPLIT_DIR, _class + "_test.txt"), mode = "r") as test_file:
for obj in test_file:
obj = obj.strip("\n")
test_obj.append(obj)

    # Pick the objs that receive one extra augmentation: each train obj gets
    # NUM_AUG // len(train_obj) copies, and NUM_AUG % len(train_obj) randomly
    # chosen objs get one more. Copy before shuffling so train_obj keeps its order.
    sample_objs = train_obj.copy()
    random.shuffle(sample_objs)
    sample_objs = sample_objs[0 : NUM_AUG % len(train_obj)]

# Count sampling times for each train obj
# Avoid opening a file multiple times
num_per_obj = NUM_AUG // len(train_obj)
train_count = {}
for sample in set(train_obj):
train_count[sample] = sample_objs.count(sample) + num_per_obj

aug_category(class_path, _class, train_obj, train_count, "train")

# Count for test set
test_count = {}
for sample in set(test_obj):
test_count[sample] = 1

aug_category(class_path, _class, test_obj, test_count, "test")