Official repository for Qian Huang, Minghao Hu, and David J. Brady, "Array Camera Image Fusion using Physics-Aware Transformers," Journal of Imaging Science and Technology, 2022.
The Colab notebook for the experiment on the wide-narrow field-of-view system is available at PAT/Inference_as_a_whole_dual_vision.ipynb.
A Colab notebook for experimenting with the pretrained PAT using a local receptive field (similar to the receptive field of CNNs) is available at PAT/Inference_pat_local_receptive_field.ipynb.
Method 1: Open put-together.py in Blender 2.92.0, change the paths to match your local machine, and run the script to generate the dataset.
Method 2: Download our dataset here (powered by UA ReDATA).
Then run trainingDataSynthesis/test/gen_patches.py to generate the training patches.
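Patch generation is handled entirely by gen_patches.py; the snippet below is only an illustrative sketch of the general idea (cropping overlapping square patches from the rendered images), with a hypothetical file layout, patch size, and stride that are not taken from the repository.

```python
# Illustrative sketch only; gen_patches.py performs this step in the repo.
# File layout, patch size, and stride below are hypothetical.
import os
import numpy as np
from PIL import Image

def extract_patches(img_path, out_dir, patch=128, stride=64):
    """Crop overlapping square patches from one rendered image and save them."""
    os.makedirs(out_dir, exist_ok=True)
    img = np.asarray(Image.open(img_path))
    h, w = img.shape[:2]
    idx = 0
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            Image.fromarray(img[y:y + patch, x:x + patch]).save(
                os.path.join(out_dir, f"patch_{idx:05d}.png"))
            idx += 1

# Example (hypothetical paths):
# extract_patches("renders/view0/0001.png", "patches/view0")
```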
The required packages are listed under PAT/requirements.txt. The environment was exported from the pytorch/nvidia/20.01 Docker image on the PUMA nodes of UA HPC.
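If you are not using that Docker image, the dependencies can be installed into a fresh environment with standard pip usage, for example:

pip install -r PAT/requirements.txt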
python train.py --trainset_dir [path to your training patches] --validset_dir [path to your validation patches]
OR
python train_4inputs.py --trainset_dir [path to your training patches] --validset_dir [path to your validation patches]
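train.py trains the two-input model and train_4inputs.py the four-input model. A concrete invocation might look like the following, where the directory names are placeholders for your own patch folders:

python train.py --trainset_dir ./patches/train --validset_dir ./patches/valid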
python demo_test.py --model_dir log_2inputs
OR
python demo_test_4inputs.py --model_dir log_4inputs
Use Inference_as_a_whole_pittsburgh.ipynb or Inference_as_a_whole.ipynb
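These notebooks run the trained network on whole images rather than patches; the actual model loading and preprocessing live in the notebooks themselves. As a minimal, generic sketch of that pattern (the model class, checkpoint path, and input names below are assumptions, not the repository's API):

```python
# Generic sketch of whole-image inference with a trained PyTorch model.
# The model class, checkpoint path, and input tensors are assumptions;
# see the notebooks above for the actual loading and preprocessing code.
import torch

def infer_whole_image(model, inputs, device="cuda"):
    """Run the fusion network on full-resolution inputs (no patching).

    `inputs` is a list of image tensors shaped (1, C, H, W), one per camera.
    """
    model = model.to(device).eval()
    with torch.no_grad():
        inputs = [x.to(device) for x in inputs]
        return model(*inputs).cpu()

# Example (hypothetical names):
# net = PAT(); net.load_state_dict(torch.load("log_2inputs/model.pth"))
# fused = infer_whole_image(net, [wide_tensor, narrow_tensor])
```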
Some code is borrowed from https://github.com/The-Learning-And-Vision-Atelier-LAVA/PASSRnet.