LUVLi Face Alignment: Estimating Landmarks' Location, Uncertainty, and Visibility Likelihood, CVPR 2020
[slides], [1min_talk], [supp], [demo]
UGLLI Face Alignment: Estimating Uncertainty with Gaussian Log-Likelihood Loss, ICCV Workshops on Statistical Deep Learning in Computer Vision 2019
[slides], [poster], [news], [Best Oral Presentation Award]
This repository is based on the DU-Net code.
Please cite the following papers if you find this repository useful:
```
@inproceedings{kumar2020luvli,
  title={LUVLi Face Alignment: Estimating Landmarks' Location, Uncertainty, and Visibility Likelihood},
  author={Kumar, Abhinav and Marks, Tim K. and Mou, Wenxuan and Wang, Ye and Jones, Michael and Cherian, Anoop and Koike-Akino, Toshiaki and Liu, Xiaoming and Feng, Chen},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020}
}

@inproceedings{kumar2019uglli,
  title={UGLLI Face Alignment: Estimating Uncertainty with Gaussian Log-Likelihood Loss},
  author={Kumar, Abhinav and Marks, Tim K and Mou, Wenxuan and Feng, Chen and Liu, Xiaoming},
  booktitle={ICCV Workshops on Statistical Deep Learning in Computer Vision},
  year={2019}
}
```
The code has been tested with:
- Python 2.7
- Pytorch 0.3.0 or 0.3.1
- Torchvision 0.2.0
- Cuda 8.0
- Ubuntu 18.04
Other platforms have not been tested.
Clone the repo first. Unless otherwise stated, the scripts and instructions assume that the working directory is the project root.
There are two ways to set up this repo: through Conda or through pip.
Install Conda first:
```
wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh
bash Anaconda3-2020.02-Linux-x86_64.sh
source ~/.bashrc
conda list
```
Then create and activate the environment with the desired packages:
```
conda env create --file conda_py27.yml
conda activate py27
```
Alternatively, set up a Python 2.7 virtualenv and install the packages through pip:
```
virtualenv --python=/usr/bin/python2.7 py27
source py27/bin/activate
pip install torch==0.3.1 -f https://download.pytorch.org/whl/cu80/stable
pip install torchvision==0.2.0
pip install sklearn opencv-python
sudo apt-get install libfreetype6-dev
sudo apt-get install build-essential autoconf libtool pkg-config python-opengl python-imaging python-pyrex python-pyside.qtopengl idle-python2.7 qt4-dev-tools qt4-designer libqtgui4 libqtcore4 libqt4-xml libqt4-test libqt4-script libqt4-network libqt4-dbus python-qt4 python-qt4-gl libgle3 python-dev
pip install configparser seaborn
```
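Either way, you can quickly verify the environment before proceeding. The snippet below is a hypothetical convenience check, not part of the repo:

```python
# Hypothetical sanity check for the environment (not part of this repo).
import torch
import torchvision
import cv2

print("torch:", torch.__version__)              # expect 0.3.0 or 0.3.1
print("torchvision:", torchvision.__version__)  # expect 0.2.0
print("opencv:", cv2.__version__)
print("CUDA available:", torch.cuda.is_available())
```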
We need to make some extra directories to store the datasets and the models:
```
cd $PROJECT_DIR
# This directory stores the trained models in its sub-directories
mkdir abhinav_model_dir
# For storing the train datasets
mkdir -p bigdata1/zt53/data/face
# For storing the CSV annotation files
mkdir dataset_csv
```
We also use the DU-Net 300-W Split 1 heatmap model for training. Please contact Zhiqiang Tang to get this file:
- face-layer-num-8-order-1-model-best.pth.tar - the base 300-W Split 1 face model from which everything is fine-tuned

Now copy this file to the project root:
```
cp face-layer-num-8-order-1-model-best.pth.tar $PROJECT_DIR
```
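If you want to confirm the checkpoint loads correctly, a quick inspection sketch follows; it assumes a standard PyTorch checkpoint dict, and the key names in the comment are guesses rather than guarantees:

```python
# Inspect the base DU-Net checkpoint (sketch; assumes a standard PyTorch
# checkpoint dict -- the example key names below are not guaranteed).
import torch

ckpt = torch.load("face-layer-num-8-order-1-model-best.pth.tar",
                  map_location=lambda storage, loc: storage)  # force CPU load
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))  # e.g. 'state_dict', 'epoch', 'optimizer', ...
```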
The following face datasets are used for training and testing:
1. AFW
2. HELEN
3. IBUG
4. LFPW
5. 300W Cropped indoor and outdoor - available in 4 parts
6. Menpo
7. COFW-Color
8. Multi-PIE
9. 300W_LP
10. AFLW-19 - drop an email to [email protected] to get the dataset mailed to you
11. WFLW-98
12. MERL-RAV (we refer to MERL-RAV as AFLW_ours in this repo)
The splits are made as follows (dataset numbers refer to the list above):

Split | Name | Datasets
---|---|---
1 | 300-W Split 1 | 1-4
2 | 300-W Split 2 | 1-9
3 | AFLW-19 | 10
4 | WFLW | 11
5 | MERL-RAV (AFLW_ours) | 12
Extract and move all the datasets to the `bigdata1/zt53/data/face` directory. Follow the MERL-RAV dataset instructions to get the `merl_rav_organized` directory.
Next, download the HRNet-processed annotations of the AFLW and WFLW datasets from OneDrive. Extract and move them to the `dataset_csv` directory.
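To sanity-check the CSVs, something like the following works. This is a sketch: it assumes pandas is available (it is pulled in by seaborn), and the column layout described in the comments (image path, face scale/center, then landmark coordinates) is our assumption about the HRNet annotation format:

```python
# Peek at one of the HRNet-style CSV annotation files (sketch only).
import pandas as pd

df = pd.read_csv("dataset_csv/aflw/face_landmarks_aflw_train.csv")
print(df.shape)                  # rows = annotated faces, cols = fields
print(df.columns[:6].tolist())   # first few column names
print(df.head(2))
```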
The directory structure should look like this:
```
./FaceAlignmentUncertainty/
|--- abhinav_model_dir/
|
|--- bigdata1/
|    |--- zt53/
|         |--- data/
|              |--- face/
|                   |--- 300W/
|                   |    |--- 01_Indoor/
|                   |    |--- 02_Outdoor/
|                   |--- 300W_LP/
|                   |--- aflw/
|                   |    |--- flickr/
|                   |         |--- 0/
|                   |         |--- 1/
|                   |         |--- 2/
|                   |--- afw/
|                   |--- COFW_color/
|                   |    |--- COFW_test_color.mat
|                   |--- helen/
|                   |--- ibug/
|                   |--- lfpw/
|                   |--- menpo/
|                   |--- merl_rav_organized/
|                   |--- Multi-PIE_original_data/
|                   |--- wflw/
|
|--- Bounding Boxes/
|--- data/
|--- dataset/
|
|--- dataset_csv/
|    |--- aflw/
|    |    |--- face_landmarks_aflw_test.csv
|    |    |--- face_landmarks_aflw_test_frontal.csv
|    |    |--- face_landmarks_aflw_train.csv
|    |--- wflw/
|         |--- face_landmarks_wflw_test.csv
|         |--- face_landmarks_wflw_test_blur.csv
|         |--- face_landmarks_wflw_test_expression.csv
|         |--- face_landmarks_wflw_test_illumination.csv
|         |--- face_landmarks_wflw_test_largepose.csv
|         |--- face_landmarks_wflw_test_makeup.csv
|         |--- face_landmarks_wflw_test_occlusion.csv
|         |--- face_landmarks_wflw_train.csv
|
|--- images/
|--- models/
|--- options/
|--- plot/
|--- pylib/
|--- splits_prep/
|--- test/
|--- utils/
|
|--- face-layer-num-8-order-1-model-best.pth.tar
| ...
```
Next, make the scripts executable, then prepare the dataset splits and launch training:
```
chmod +x *.sh
./scripts_dataset_splits_preparation.sh
./scripts_training.sh
```
The pre-trained models are:

Split | Directory | LUVLi | UGLLI
---|---|---|---
1 | run_108 | lr-0.00002-49.pth.tar | -
2 | run_109 | lr-0.00002-49.pth.tar | -
3 | run_507 | lr-0.00002-49.pth.tar | -
4 | run_1005 | lr-0.00002-49.pth.tar | -
5 | run_5004 | lr-0.00002-49.pth.tar | -
1 | run_924 | - | lr-0.00002-39.pth.tar
2 | run_940 | - | lr-0.00002-39.pth.tar
Copy the pre-trained models to the `abhinav_model_dir` directory first. The directory structure should look like this:
```
./FaceAlignmentUncertainty/
|--- abhinav_model_dir/
|    |--- run_108
|    |--- run_109
|    |--- run_507
|    |--- run_1005
|    |--- run_5004
|    ...
```
Next, run the evaluation:
```
./scripts_evaluation.sh
```
To reproduce our qualitative plots and the figures in the transformed space, type:
```
python plot/show_300W_images_overlaid_with_uncertainties.py --exp_id abhinav_model_dir/run_109_evaluate/ --laplacian
python plot/plot_uncertainties_in_transformed_space.py -i run_109_evaluate/300W_test --laplacian
python plot/plot_residual_covariance_vs_predicted_covariance.py -i run_109_evaluate --laplacian
python plot/plot_histogram_smallest_eigen_value.py -i run_109_evaluate --laplacian
```
The training and evaluation commands accept the following options:

Option | Command
---|---
UGLLI | Default
LUVLi | --laplacian --use_visibility
Post-processing by ReLU | --pp "relu"
Augmentation scheme of Bulat et al., ICCV 2017 | --bulat_aug
Use Slurm | --slurm
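For reference, these switches might be declared roughly as below. This is a sketch, not the repo's actual parser (which lives in the `options/` directory), and the help strings are our paraphrases of the table above:

```python
# Hypothetical sketch of how the switches above could be declared with argparse.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--laplacian", action="store_true",
                    help="Laplacian likelihood (LUVLi) instead of Gaussian (UGLLI)")
parser.add_argument("--use_visibility", action="store_true",
                    help="also estimate per-landmark visibility (LUVLi)")
parser.add_argument("--pp", type=str, default="",
                    help="post-processing, e.g. 'relu'")
parser.add_argument("--bulat_aug", action="store_true",
                    help="augmentation scheme of Bulat et al., ICCV 2017")
parser.add_argument("--slurm", action="store_true",
                    help="run jobs through Slurm")

args = parser.parse_args([])  # no flags -> defaults, i.e. the UGLLI setting
print(args)
```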
Each split's images are assumed to live in separate directories, with every image accompanied by a landmark ground-truth file of the same name with a pts/mat extension. The bounding-box ground truth for the first four face datasets is a mat file. The bounding boxes for the other datasets are calculated by adding 5% noise to the tightest bounding box.
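One plausible reading of the 5% noise rule is sketched below; this is an illustration under our assumptions, not the repo's exact implementation:

```python
# Sketch: tightest box around the landmarks, each side jittered by up to 5%.
import numpy as np

def noisy_tight_bbox(landmarks, noise_frac=0.05, rng=np.random):
    """landmarks: (N, 2) array of ground-truth (x, y) points."""
    x_min, y_min = landmarks.min(axis=0)
    x_max, y_max = landmarks.max(axis=0)
    w, h = x_max - x_min, y_max - y_min
    # Perturb each side independently by up to noise_frac of the box size.
    x_min += rng.uniform(-noise_frac, noise_frac) * w
    x_max += rng.uniform(-noise_frac, noise_frac) * w
    y_min += rng.uniform(-noise_frac, noise_frac) * h
    y_max += rng.uniform(-noise_frac, noise_frac) * h
    return x_min, y_min, x_max, y_max

pts = np.random.rand(68, 2) * 200  # stand-in for 68 ground-truth landmarks
print(noisy_tight_bbox(pts))
```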
Go to the `splits_prep` directory and open `config.txt`:
```
input_folder_path = ./bigdata1/zt53/data/face/ # the base path of all the images in the folder
annotations_path = ./Bounding Boxes/ # the bounding box ground truths are in this folder
num_keypoints = 68 # assumed to be constant for a particular split
train_datasets_names = lfpw, helen, afw # train dataset names
train_folders = lfpw/trainset, helen/trainset, afw # folder paths relative to input_folder_path
train_annotations = bounding_boxes_lfpw_trainset.mat, bounding_boxes_helen_trainset.mat, bounding_boxes_afw.mat # paths relative to annotations_path
val_datasets_names = lfpw, helen, ibug # val dataset names
val_folders = lfpw/testset, helen/testset, ibug # folder paths relative to input_folder_path
val_annotations = bounding_boxes_lfpw_testset.mat, bounding_boxes_helen_testset.mat, bounding_boxes_ibug.mat # paths relative to annotations_path
output_folder = ./dataset # folder in which the JSONs are to be stored
output_prefix = normal_ # prefix of the JSONs
```
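The format is plain `key = value` lines with inline `#` comments and comma-separated lists. A minimal reader for it might look like the sketch below; `splits_prep/get_jsons_from_config.py` is the real consumer and may parse it differently:

```python
# Minimal reader for the config format above (sketch; the repo's parser may differ).
def read_config(path):
    cfg = {}
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()  # drop inline comments
            if not line:
                continue
            key, _, value = line.partition("=")
            items = [v.strip() for v in value.split(",")]
            cfg[key.strip()] = items[0] if len(items) == 1 else items
    return cfg

cfg = read_config("splits_prep/config.txt")
print(cfg["train_folders"])  # ['lfpw/trainset', 'helen/trainset', 'afw']
```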
We have already placed the bounding-box initializations and the annotations of the 300-W training images in the `Bounding Boxes` directory. In case you are wondering about the source of these annotations, you can also download them from here.
To get the JSONs for 300-W Split 1 and Split 2, type:
```
python splits_prep/get_jsons_from_config.py -i splits_prep/config.txt
python splits_prep/get_jsons_from_config.py -i splits_prep/config_split2.txt
```
To get the JSONs for Menpo and Multi-PIE, type:
```
python splits_prep/get_jsons_from_config.py -i splits_prep/config_menpo.txt
python splits_prep/get_jsons_from_config.py -i splits_prep/config_multi_pie.txt
```
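To check what came out, you can load one of the generated JSONs. The file name below is our guess based on `output_prefix = normal_` in config.txt, and the schema is also an assumption:

```python
# Sanity-check a generated split JSON (sketch; file name and schema are guesses).
import json

with open("dataset/normal_train.json") as f:
    data = json.load(f)

print(len(data), "entries in the split")
print(data[0] if data else "empty")  # one annotation record
```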
Feel free to drop an email to this address: [email protected]