Follow these steps to set up a simple example:
git clone https://github.com/CMU-ILIM/Combined-Keypoints
cd Combined-Keypoints
Install the required Python libraries:
virtualenv Combined-Keypoints -p python3.6
source Combined-Keypoints/bin/activate
pip3 install numpy cython
pip3 install -r requirements.txt
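Once the requirements finish installing, a quick import check can confirm the virtualenv is usable. A minimal sketch, assuming the requirements file pulls in PyTorch for the detector:

```python
# Quick sanity check that the core dependencies resolved inside the virtualenv.
import numpy
import cython
import torch  # assumption: requirements.txt installs PyTorch for the detector

print("numpy", numpy.__version__)
print("cython", cython.__version__)
print("torch", torch.__version__, "CUDA available:", torch.cuda.is_available())
```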
Compile the required detector libraries:
cd Detector/lib
sh make.sh
I use ImageNet pretrained weights from Caffe for the backbone networks.
Download them and put them into {repo_root}/data/pretrained_model. You can use the following command to download them all:
- extra required packages: argparse_color_formater, colorama, requests
python tools/download_imagenet_weights.py
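If the helper script cannot be used, the weights can also be fetched manually into that folder; its dependency on requests suggests it performs a plain HTTP download. A minimal sketch with a placeholder URL (not the real weight location):

```python
import os
import requests

# Placeholder URL -- substitute the actual Caffe backbone weight file you need.
WEIGHT_URL = "https://example.com/resnet50_caffe.pth"
DEST_DIR = os.path.join("data", "pretrained_model")

os.makedirs(DEST_DIR, exist_ok=True)
dest_path = os.path.join(DEST_DIR, os.path.basename(WEIGHT_URL))

# Stream the download so large weight files do not need to fit in memory.
with requests.get(WEIGHT_URL, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open(dest_path, "wb") as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)

print("saved", dest_path)
```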
NOTE: The Caffe pretrained weights give slightly better performance than the PyTorch pretrained ones; we suggest using the Caffe pretrained models from the download above to reproduce the results. Detectron also uses pretrained weights from Caffe.
If you want to use PyTorch pretrained models, remember to convert images from BGR to RGB and to apply the same data preprocessing (mean subtraction and normalization) that the PyTorch pretrained models were trained with.
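For concreteness, here is a minimal sketch of that preprocessing, assuming OpenCV for image loading (OpenCV reads images as BGR) and the standard torchvision ImageNet mean/std statistics:

```python
import cv2
import numpy as np
import torch

# OpenCV loads images in BGR order; PyTorch pretrained backbones expect RGB.
bgr = cv2.imread("demo/example.jpg")  # illustrative path
assert bgr is not None, "image not found"
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)

# Standard torchvision ImageNet statistics used by PyTorch pretrained models.
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)

img = rgb.astype(np.float32) / 255.0   # scale to [0, 1]
img = (img - mean) / std               # subtract mean, normalize
tensor = torch.from_numpy(img).permute(2, 0, 1)  # HWC -> CHW
```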
Download the trained keypoint models for cars and persons and place them in the home folder.
Run the following command to run the detector on a video:
sh test.sh 0 0 demo/
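If your input is a video file rather than a folder of frames, one option is to dump the frames into demo/ first. A minimal sketch, assuming test.sh consumes a directory of images (check the script for the exact input format it expects):

```python
import os
import cv2

# Assumption: test.sh reads a directory of frames; this dumps a video into
# demo/ as numbered JPEGs so the demo command above can pick them up.
VIDEO_PATH = "input.mp4"  # illustrative path
OUT_DIR = "demo"

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(VIDEO_PATH)
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(os.path.join(OUT_DIR, f"frame_{idx:06d}.jpg"), frame)
    idx += 1
cap.release()
print(f"wrote {idx} frames to {OUT_DIR}/")
```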
To train the detector, download the CarFusion dataset and set up the training data:
cd Detector
mkdir data
cd data
mkdir fifth
cd fifth
wget http://www.cs.cmu.edu/~ILIM/projects/IM/CarFusion/Dataset.zip
unzip Dataset.zip
mv Dataset train
cd ../../..
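Before launching training, it can help to verify the dataset ended up where the steps above expect it. A minimal sketch that checks the Detector/data/fifth/train layout created above, run from the repo root:

```python
import os

# Sanity-check the dataset layout created by the commands above (paths follow
# the mkdir/unzip/mv steps, relative to the repo root).
train_dir = os.path.join("Detector", "data", "fifth", "train")

if not os.path.isdir(train_dir):
    raise SystemExit(f"expected training data at {train_dir}; rerun the download steps")

n_files = sum(len(files) for _, _, files in os.walk(train_dir))
print(f"{train_dir} contains {n_files} files -- ready to run train.sh")
```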
Finally, launch training:
sh train.sh