tf-openpose

'OpenPose' for human pose estimation has been implemented using TensorFlow. This repository also provides several variants with modified network architectures for real-time processing on CPUs or low-power embedded devices.

You can even run this on your MacBook at a decent FPS!

Original Repo(Caffe) : https://github.com/CMU-Perceptual-Computing-Lab/openpose

| CMU's Original Model<br/>on Macbook Pro 15" | Mobilenet Variant<br/>on Macbook Pro 15" | Mobilenet Variant<br/>on Jetson TX2 |
|---------------------------------------------|------------------------------------------|-------------------------------------|
| cmu-model | mb-model-macbook | mb-model-tx2 |
| ~0.6 FPS | ~4.2 FPS @ 368x368 | ~10 FPS @ 368x368 |
| 2.8GHz Quad-core i7 | 2.8GHz Quad-core i7 | Jetson TX2 Embedded Board |

Implemented features are listed here: features

Install

Dependencies

You need the dependencies below.

  • python3

  • tensorflow 1.3

  • opencv3

  • protobuf

  • python3-tk

Install

$ git clone https://www.github.com/ildoonet/tf-openpose
$ cd tf-openpose
$ pip3 install -r requirements.txt
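
After installing, you can quickly check that the main dependencies import with the expected versions. This is a minimal sanity check, not part of the repository:

```python
# Quick sanity check that the main dependencies are importable.
import tensorflow as tf
import cv2

print('TensorFlow:', tf.__version__)   # expected: 1.3.x
print('OpenCV:', cv2.__version__)      # expected: 3.x
```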

Models

  • cmu

    • The model based on the VGG-pretrained network described in the original paper.
    • I converted the weights from Caffe format for use in TensorFlow.
    • weight download
  • dsconv

    • Same architecture as the cmu version, except that the regular convolutions are replaced with MobileNet's depthwise separable convolutions (see the sketch after this list).
    • I trained it using transfer learning, but it does not provide enough speed or accuracy.
  • mobilenet

    • Based on the mobilenet paper, 12 convolutional layers are used as feature-extraction layers.
    • To improve detection of small persons, minor modifications to the architecture have been made.
    • Three models were trained with different network-size parameters.
    • The published models are not the best ones, but you can test them before training a model from scratch.
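
The dsconv and mobilenet variants replace standard convolutions with MobileNet-style depthwise separable convolutions: a per-channel 3x3 (depthwise) convolution followed by a 1x1 (pointwise) convolution, which greatly reduces multiply-adds. Below is a minimal TF 1.x sketch of the two building blocks; it is an illustration only, not this repository's layer code:

```python
# Contrast a regular 3x3 convolution with a depthwise separable convolution
# (TF 1.x style, matching the tensorflow 1.3 dependency). Illustration only.
import tensorflow as tf

def regular_conv(x, out_channels):
    # Standard 3x3 convolution: one dense kernel over all input channels.
    return tf.layers.conv2d(x, out_channels, kernel_size=3, padding='same',
                            activation=tf.nn.relu)

def depthwise_separable_conv(x, out_channels):
    # 1) Depthwise 3x3 convolution: one filter per input channel.
    in_channels = x.get_shape().as_list()[-1]
    dw_filter = tf.get_variable('dw_filter', [3, 3, in_channels, 1])
    x = tf.nn.depthwise_conv2d(x, dw_filter, strides=[1, 1, 1, 1], padding='SAME')
    x = tf.nn.relu(x)
    # 2) Pointwise 1x1 convolution mixes channels; together the two steps
    #    need far fewer multiply-adds than one dense 3x3 convolution.
    return tf.layers.conv2d(x, out_channels, kernel_size=1, padding='same',
                            activation=tf.nn.relu)
```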

Inference Time

Macbook Pro - 3.1GHz i5 Dual Core

| Dataset | Model | Inference Time |
|---------|-------|----------------|
| Coco | cmu | 10.0s @ 368x368 |
| Coco | dsconv | 1.10s @ 368x368 |
| Coco | mobilenet_accurate | 0.40s @ 368x368 |
| Coco | mobilenet | 0.24s @ 368x368 |
| Coco | mobilenet_fast | 0.16s @ 368x368 |

Jetson TX2

On NVIDIA's embedded GPU board, test results are as below.

| Dataset | Model | Inference Time |
|---------|-------|----------------|
| Coco | cmu | OOM @ 368x368<br/>5.5s @ 320x240 |
| Coco | mobilenet_accurate | 0.18s @ 368x368 |
| Coco | mobilenet | 0.10s @ 368x368 |
| Coco | mobilenet_fast | 0.07s @ 368x368 |

CMU's original model cannot be executed at 368x368 due to an out-of-memory error.
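
Numbers like those above can be reproduced with a simple wall-clock measurement around the estimation call. The sketch below is only a generic timing harness; `estimate` is a hypothetical stand-in for whatever function runs one forward pass of the pose network on an image:

```python
# Minimal timing sketch. `estimate` is a hypothetical placeholder for a
# single forward pass of the pose network on one image.
import time

def measure_inference_time(estimate, image, warmup=3, runs=10):
    for _ in range(warmup):        # warm up graph initialization / caches
        estimate(image)
    start = time.perf_counter()
    for _ in range(runs):
        estimate(image)
    return (time.perf_counter() - start) / runs   # average seconds per run
```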

Demo

Test Inference

You can test the inference feature with a single image.

$ python3 inference.py --model=mobilenet --imgpath=...

Then you will see a screen like the one below with the PAF map, heatmap, result, etc.

inferent_result

Realtime Webcam

$ python3 realtime_webcam.py --camera=0 --model=mobilenet --zoom=1.0

Then you will see the realtime webcam screen with estimated poses as below. This realtime result was recorded on a MacBook Pro 13" with a 3.1GHz dual-core CPU.
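
A realtime demo of this kind boils down to an OpenCV capture loop that runs the estimator on each frame and reports the achieved FPS. The sketch below is not the repository's realtime_webcam.py; `estimate_and_draw` is a hypothetical placeholder for the pose-estimation step:

```python
# Minimal webcam loop: grab frames from camera 0 and report achieved FPS.
# `estimate_and_draw` is a hypothetical placeholder for pose estimation.
import time
import cv2

def estimate_and_draw(frame):
    return frame  # placeholder: run the pose network and draw skeletons here

cap = cv2.VideoCapture(0)
prev = time.time()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = estimate_and_draw(frame)
    now = time.time()
    fps = 1.0 / max(now - prev, 1e-6)
    prev = now
    cv2.putText(frame, 'FPS: %.1f' % fps, (10, 20),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow('pose', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```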

Training

See : etcs/training.md

References

OpenPose

[1] https://github.com/CMU-Perceptual-Computing-Lab/openpose

[2] Training Codes : https://github.com/ZheC/Realtime_Multi-Person_Pose_Estimation

[3] Custom Caffe by Openpose : https://github.com/CMU-Perceptual-Computing-Lab/caffe_train

[4] Keras Openpose : https://github.com/michalfaber/keras_Realtime_Multi-Person_Pose_Estimation

Mobilenet

[1] Original Paper : https://arxiv.org/abs/1704.04861

[2] Pretrained model : https://github.com/tensorflow/models/blob/master/slim/nets/mobilenet_v1.md

Libraries

[1] Tensorpack : https://github.com/ppwwyyxx/tensorpack

Tensorflow Tips

[1] Freeze graph : https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py

[2] Optimize graph : https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2
