- Updates
- Central Idea
- Motivation
- Limitation of existing works
- Pipeline
- Instructions for Code usage
- Citation
CLIP2Protect: Protecting Facial Privacy using Text-Guided Makeup via Adversarial Latent Search [CVPR 2023]
Fahad Shamshad,
Muzammal Naseer,
Karthik Nandakumar
MBZUAI, UAE.
- July-19: Code released.
- June-19: Code and demo release coming soon. Stay tuned!
We all love sharing photos online, but did you know that big companies and even governments can use sneaky 🕵️ face recognition software to track us? Our research takes this challenge head-on with a simple, creative idea 🌟: using carefully crafted makeup 💄 to outsmart the tracking software. The cherry on top? The makeup is guided by everyday, easy-to-understand language 🗣️, giving users far more flexibility! Our approach keeps your photos safe 🛡️ from unwanted trackers without making you look weird or putting bizarre patches on your face, issues commonly seen with previous solutions.
- Malicious black-box face recognition (FR) systems pose a serious threat to the personal security and privacy of the 5 billion people who use social media.
- Unauthorized entities can use FR systems to track user activities by scraping face images from social media platforms.
- There is an urgent demand for effective privacy preservation methods.
- Recent noise-based facial privacy protection approaches leave visible artefacts in the protected images.
- Patch-based privacy approaches provide weak privacy protection, and their large, visible patterns compromise naturalness.
CLIP2Protect generates protected face images that look natural and realistic while providing a high level of privacy protection, so you can keep sharing images without worrying about unwanted tracking. It consists of two stages:
- The latent code initialization stage reconstructs the given face image in the latent space by fine-tuning the generative model.
- The text-guided adversarial optimization stage uses user-defined makeup text prompts and identity-preserving regularization to guide the search for adversarial latent codes that effectively protect facial privacy, as illustrated in the sketch below.
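To make the second stage concrete, here is a minimal illustrative sketch of the text-guided adversarial latent search. Everything named here is an assumption rather than the repository's exact API: `generator` stands in for the fine-tuned StyleGAN2, `face_encoder` for a surrogate face recognition model, `latents` for the stage-one inverted codes, and `target_embedding` for the target identity's embedding; the prompt, loss weights, and step count are placeholders.

```python
# Illustrative sketch of the text-guided adversarial latent search (stage 2).
# Assumed stand-ins (NOT the repository's exact API): `generator` maps W+
# latents to images in [-1, 1], `face_encoder` returns identity embeddings,
# `latents` are the stage-1 inverted codes, `target_embedding` is the target
# identity's embedding. Prompt and hyperparameters are placeholders.
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float()  # keep fp32 for stable gradients
text_feat = clip_model.encode_text(
    clip.tokenize(["a face with bold red lipstick"]).to(device)
).detach()

def clip_preprocess(img):
    # Map generator output from [-1, 1] to CLIP's normalized 224x224 input.
    img = F.interpolate((img + 1) / 2, size=224, mode="bilinear", align_corners=False)
    mean = img.new_tensor([0.48145466, 0.4578275, 0.40821073]).view(1, 3, 1, 1)
    std = img.new_tensor([0.26862954, 0.26130258, 0.27577711]).view(1, 3, 1, 1)
    return (img - mean) / std

w = latents.clone().requires_grad_(True)  # search variable in latent space
w_anchor = latents.clone().detach()       # identity-preserving anchor
opt = torch.optim.Adam([w], lr=0.01)

for step in range(50):
    img = generator(w)
    # Makeup loss: align the rendered face with the makeup text prompt.
    img_feat = clip_model.encode_image(clip_preprocess(img))
    makeup_loss = 1 - F.cosine_similarity(img_feat, text_feat).mean()
    # Adversarial loss: pull the identity embedding towards the target.
    adv_loss = 1 - F.cosine_similarity(face_encoder(img), target_embedding).mean()
    # Regularization: stay close to the inverted code to preserve identity.
    reg_loss = F.mse_loss(w, w_anchor)
    loss = makeup_loss + adv_loss + 0.01 * reg_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the search moves only through the generator's latent space, the perturbation is constrained to the natural image manifold, which is why the protected faces avoid the noise and patch artefacts of earlier methods.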
- Get the code:

      git clone https://github.com/fahadshamshad/Clip2Protect.git
- Build the environment:

      cd Clip2Protect
      # use anaconda to build the environment
      conda create -n clip2protect python=3.8
      conda activate clip2protect
      # install packages
      pip install -r requirements.txt
- Our solution relies on the Rosinality PyTorch implementation of StyleGAN2.
- Download the pre-trained StyleGAN2 weights:
  - Download the weights from here.
  - Place the weights in the `pretrained_models` folder (a hedged loading sketch follows below).
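The weights can then be loaded with the Rosinality codebase's `Generator` class. A minimal sketch, assuming the checkpoint is saved as `pretrained_models/stylegan2-ffhq-config-f.pt` (the usual FFHQ checkpoint name; adjust the path to match your download):

```python
# Hedged sketch of loading the Rosinality StyleGAN2 generator.
# Assumes model.py from rosinality/stylegan2-pytorch is on the path and the
# checkpoint filename below matches what you downloaded.
import torch
from model import Generator  # rosinality/stylegan2-pytorch

generator = Generator(size=1024, style_dim=512, n_mlp=8).cuda()
ckpt = torch.load("pretrained_models/stylegan2-ffhq-config-f.pt")
generator.load_state_dict(ckpt["g_ema"], strict=False)
generator.eval()

# Sanity check: synthesize one random face.
with torch.no_grad():
    z = torch.randn(1, 512).cuda()
    img, _ = generator([z])  # styles are passed as a list
```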
- Download pretrained face recognition models and dataset instructions:
  - To acquire pretrained face recognition models and dataset instructions, including target images, please refer to the AMT-GAN page here.
  - Place the pretrained face recognition models in the `models` folder.
- Acquire latent codes:
  - We assume the latent codes are available in the `latents.pt` file (a quick sanity check is sketched below).
  - You can acquire the latent codes of the face images to be protected using the encoder4editing (e4e) method available here.
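As a quick sanity check (a sketch; the exact tensor shape depends on your e4e export, but W+ codes for a 1024x1024 StyleGAN2 are typically `(N, 18, 512)`):

```python
# Verify the latent file before running the pipeline.
import torch

latents = torch.load("latents.pt")          # W+ codes produced by e4e
print(type(latents), tuple(latents.shape))  # expect something like (N, 18, 512)
```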
- Run the code:
  - The core functionality is in `main.py`.
  - Provide the `latents.pt` file and the corresponding faces directory, named `input_images`.
  - Generate the protected faces in the `results` folder by running the following command (a hedged evaluation sketch follows below):

        python main.py --data_dir input_images --latent_path latents.pt --protected_face_dir results
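One hedged way to check that protection worked is to compare identity embeddings of an original face and its protected counterpart. In the sketch below, `face_encoder` stands in for one of the pretrained face recognition models, the file names are hypothetical, and the 112x112 input size is only the common convention for FR models:

```python
# Hedged sketch: compare identity embeddings before and after protection.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms

to_tensor = transforms.Compose([
    transforms.Resize((112, 112)),           # common FR input size (assumption)
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])

orig = to_tensor(Image.open("input_images/0.png")).unsqueeze(0)  # hypothetical file
prot = to_tensor(Image.open("results/0.png")).unsqueeze(0)       # hypothetical file

with torch.no_grad():
    sim = F.cosine_similarity(face_encoder(orig), face_encoder(prot))
# Lower similarity means the FR model has a harder time matching the
# protected face back to the original identity.
print(f"identity similarity: {sim.item():.3f}")
```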
- Generator finetuning and adversarial optimization stages:
  - The generator finetuning is implemented in `pivot_tuning.py` (an illustrative sketch of this stage follows below).
  - The adversarial optimization is implemented in `adversarial_optimization.py`.
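For intuition, here is a hedged sketch of what the pivotal-tuning stage does (the actual `pivot_tuning.py` may differ): the inverted latent is frozen as a "pivot" while the generator weights are fine-tuned until the synthesized face matches the real one. `generator`, `w_pivot`, and `real_image` are assumed inputs, and the perceptual loss comes from the `lpips` package:

```python
# Hedged sketch of stage 1 (generator fine-tuning around a fixed pivot).
# Assumed inputs: `generator` (StyleGAN2), `w_pivot` (the e4e-inverted latent,
# kept frozen), `real_image` (the face to protect, scaled to [-1, 1]).
import torch
import torch.nn.functional as F
import lpips

percept = lpips.LPIPS(net="vgg").cuda()
opt = torch.optim.Adam(generator.parameters(), lr=3e-4)

for step in range(350):
    synth = generator(w_pivot)  # only the generator weights are updated
    loss = percept(synth, real_image).mean() + F.mse_loss(synth, real_image)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Fine-tuning around a fixed pivot yields a faithful reconstruction of the given face, so the subsequent adversarial latent search starts from an image that already looks like the user.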
If you're using CLIP2Protect in your research or applications, please cite it using this BibTeX:
@inproceedings{shamshad2023clip2protect,
title={CLIP2Protect: Protecting Facial Privacy Using Text-Guided Makeup via Adversarial Latent Search},
author={Shamshad, Fahad and Naseer, Muzammal and Nandakumar, Karthik},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={20595--20605},
year={2023}
}