Agriculture-Vision Dataset, Challenge and Workshop (CVPR 2020)
CVPR 2020 paper on Agriculture-Vision:
@article{chiu2020agriculture,
  title={Agriculture-Vision: A Large Aerial Image Database for Agricultural Pattern Analysis},
  author={Chiu, Mang Tik and Xu, Xingqian and Wei, Yunchao and Huang, Zilong and Schwing, Alexander and Brunner, Robert and Khachatrian, Hrant and Karapetyan, Hovnatan and Dozier, Ivan and Rose, Greg and others},
  journal={arXiv preprint arXiv:2001.01306},
  year={2020}
}
CVPR Workshop paper on the 1st Agriculture-Vision Challenge (challenge dataset, methods and results):
@article{chiu20201st,
  title={The 1st Agriculture-Vision Challenge: Methods and Results},
  author={Chiu, Mang Tik and Xu, Xingqian and Wang, Kai and Hobbs, Jennifer and Hovakimyan, Naira and Huang, Thomas S. and Shi, Honghui and others},
  journal={arXiv preprint arXiv:2004.09754},
  year={2020}
}
The dataset used in this challenge is a subset of the Agriculture-Vision dataset. The challenge dataset contains 21,061 aerial farmland images captured throughout 2019 across the US. Each image consists of four 512x512 channels: RGB and Near-Infrared (NIR). Each image also has a boundary map, which delineates the extent of the farmland, and a mask, which marks the valid pixels in the image. Pixels falling outside either the boundary map or the mask are not evaluated.
This dataset contains six types of annotations: Cloud shadow, Double plant, Planter skip, Standing water, Waterway and Weed cluster. These field anomalies significantly affect the potential yield of farmlands, so locating them accurately is extremely important. In the Agriculture-Vision dataset, these six patterns are stored as separate binary masks because they may overlap. Users are free to decide how to use these annotations.
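To make this structure concrete, below is a minimal loading sketch. The directory layout (`images/rgb`, `images/nir`, `boundaries`, `masks`, `labels/<pattern>`) and the placeholder file stem are assumptions for illustration; adjust them to match your extracted copy of the dataset.

```python
import numpy as np
from PIL import Image

name = "field-id_x1-y1-x2-y2"  # placeholder stem; see the file name format below

# Four channels per image: RGB and NIR, each 512x512.
rgb = np.array(Image.open(f"train/images/rgb/{name}.jpg"))  # (512, 512, 3)
nir = np.array(Image.open(f"train/images/nir/{name}.jpg"))  # (512, 512)

# Boundary map (farmland region) and mask (valid pixels); only pixels
# inside both are evaluated.
boundary = np.array(Image.open(f"train/boundaries/{name}.png")) > 0
mask = np.array(Image.open(f"train/masks/{name}.png")) > 0
valid = boundary & mask

# The six annotations are separate, possibly overlapping binary masks.
patterns = ["cloud_shadow", "double_plant", "planter_skip",
            "standing_water", "waterway", "weed_cluster"]
labels = np.stack(
    [np.array(Image.open(f"train/labels/{p}/{name}.png")) > 0 for p in patterns],
    axis=-1)  # (512, 512, 6)
```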
Each field image has a file name in the format of (field id)_(x1)-(y1)-(x2)-(y2).(jpg/png). Each field id uniquely identifies the farmland from which the image is cropped, and (x1, y1, x2, y2) is a 4-tuple indicating the position of the crop within that field. Please refer to our paper for more details on how we constructed the dataset. A small parsing sketch follows.
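This format can be parsed with a regular expression; the example file name below is made up for illustration.

```python
import re

FNAME_RE = re.compile(
    r"^(?P<field>[^_]+)_(?P<x1>\d+)-(?P<y1>\d+)-(?P<x2>\d+)-(?P<y2>\d+)\.(?:jpg|png)$")

m = FNAME_RE.match("1AB2C3D4E_1024-2048-1536-2560.jpg")  # hypothetical file name
field_id = m.group("field")
x1, y1, x2, y2 = (int(m.group(k)) for k in ("x1", "y1", "x2", "y2"))
print(field_id, (x1, y1, x2, y2))  # 1AB2C3D4E (1024, 2048, 1536, 2560)
```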
The challenge dataset contains images, boundaries and masks for the train, val and test sets. Labels are provided for the train and val sets only. The dataset .tar.gz file is around 4.4 GB. Please contact us to get access.
We use mean Intersection-over-Union (mIoU) as our main quantitative evaluation metric, one of the most commonly used measures for semantic segmentation datasets. The mIoU is computed as:

mIoU = (1/c) Σᵢ |Pᵢ ∩ Tᵢ| / |Pᵢ ∪ Tᵢ|

where c is the number of annotation types (c = 7 in our dataset, with 6 patterns + background), and Pᵢ and Tᵢ are the predicted mask and the ground truth mask of class i respectively.
Since our annotations may overlap, we modify the canonical mIoU metric to accommodate this property. For pixels with multiple labels, a prediction of either label will be counted as a correct pixel classification for that label, and a prediction that does not contain any ground truth labels will be counted as an incorrect classification for all ground truth labels.
Concretely, we construct the c×c confusion matrix M with the following rules:
For each pixel, let x be the predicted label and Y be the set of ground truth labels:
- If x ∈ Y, then M(y, y) is incremented by 1 for each y in Y
- Otherwise, M(x, y) is incremented by 1 for each y in Y
The mIoU is finally computed as true_positive / (prediction + target − true_positive), averaged across all classes.
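As a reference for the rules above, here is a minimal, unoptimized sketch of the modified metric. The input layout (integer label predictions, a 7-channel boolean ground truth stack that includes background, and a validity mask) is our assumption, not the official evaluation code.

```python
import numpy as np

def modified_miou(pred, gt, valid, num_classes=7):
    """pred: (H, W) int labels in 0..6; gt: (H, W, 7) bool, possibly
    overlapping per-class masks (channel 0 = background); valid: (H, W) bool."""
    M = np.zeros((num_classes, num_classes), dtype=np.int64)
    for i, j in zip(*np.nonzero(valid)):
        x = pred[i, j]
        Y = np.nonzero(gt[i, j])[0]      # all labels this pixel carries
        if x in Y:
            for y in Y:                  # correct for every carried label
                M[y, y] += 1
        else:
            for y in Y:                  # incorrect for every carried label
                M[x, y] += 1
    tp = np.diag(M).astype(np.float64)
    prediction = M.sum(axis=1)           # row x: pixels predicted as x
    target = M.sum(axis=0)               # column y: pixels labeled y
    with np.errstate(divide="ignore", invalid="ignore"):
        iou = tp / (prediction + target - tp)
    return float(np.nanmean(iou))        # classes absent from both are skipped
```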
Update:
- The 1st Agriculture-Vision Prize Challenge ended on April 20, 2020 (10:00 AM PDT). The final results as of the challenge end date can be found in these links (png, csv).
- The Codalab evaluation server will remain open to encourage further research in Agriculture-Vision. No registration is required.
We are now hosting our evaluation/competition on Codalab. The competition page can be found here. Each participating team is required to register for the challenge. To register your team, fill out the registration form here, then register on the competition page.
*Make sure your Codalab account email matches one of the member emails in the registration form. Each team can only register once.
All registered teams can evaluate their results on Codalab and publish them on the leaderboard. The submission file should be a compressed .zip file containing all prediction images. All prediction images should be in png format, and their file names and image sizes should match the input images exactly. Each prediction image will be converted to a 2D numpy array with the following code:
numpy.array(PIL.Image.open('field-id_x1-y1-x2-y2.png'))
In the loaded numpy array, only 0-6 integer labels are allowed, and they represent the annotations in the following way:
0 - background
1 - cloud_shadow
2 - double_plant
3 - planter_skip
4 - standing_water
5 - waterway
6 - weed_cluster
This label order will be strictly followed during evaluation.
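For instance, a prediction can be written and sanity-checked as below; `save_prediction` is a hypothetical helper, and the all-background 512x512 array is only a stand-in for a real model output.

```python
import numpy as np
from PIL import Image

def save_prediction(pred, out_path):
    # One byte per pixel, values restricted to the 0-6 labels listed above.
    assert pred.dtype == np.uint8 and pred.max() <= 6
    Image.fromarray(pred, mode="L").save(out_path)

pred = np.zeros((512, 512), dtype=np.uint8)          # stand-in prediction
save_prediction(pred, "field-id_x1-y1-x2-y2.png")

# Round trip exactly the way the evaluation server loads submissions.
loaded = np.array(Image.open("field-id_x1-y1-x2-y2.png"))
assert loaded.shape == (512, 512) and loaded.max() <= 6
```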
All teams can have 2 submissions per day and 20 submissions in total.
We hosted the 1st Agriculture-Vision Prize Challenge under the following terms:
- Model size is limited to 150M parameters in total.
- The mIoU derived from the "results/" folder in the final submission must match the mIoU on the leaderboard.
- Predictions in "results/" in the final submission must be reproducible with the resources in "code/" and "models/".
- The training process of the method must be reproducible, and the retrained model should achieve similar performance.
- The test set is off-limits in training.
- For fairness, teams must specify in their challenge_report.pdf which public datasets were used for training/pre-training their models. Results generated from models trained on private datasets, and results without such details, will be excluded from prize evaluation. (Results using private datasets can still be included in the report.)
Please refer to our workshop summary paper for more details regarding notable methods. Selected teams & methods:
Team Hyunseong: Hyunseong Park, Junhee Kim, Sungho Kim (Agency for Defense Development, South Korea)
- Method: Residual DenseNet with Expert Network for Semantic Segmentation
Team SCG Vision: Qinghui Liu, Michael C. Kampffmeyer, Robert Jenssen, Arnt B. Salberg (Norwegian Computing Center, UiT The Arctic University of Norway)
- Method: Multi-view Self-Constructing Graph Convolutional Networks with Adaptive Class Weighting Loss for Semantic Segmentation
Team AGR: Alexandre Barbosa, Rodrigo Trevisan (University of Illinois at Urbana Champaign)
- Method: Techniques for Improving Aerial Agricultural Image Semantic Segmentation
Team TJU: Bingchen Zhao, Shaozuo Yu, Siwei Yang, Yin Wang (Tongji University)
- Method: Reducing the feature divergence of RGB and near-infrared images using Switchable Normalization
Team Haossr: Hao Sheng, Xiao Chen, Jingyi Su, Ram Rajagopal, Andrew Ng (Stanford University, Chegg, Inc.)
- Method: Effective Data Fusion with Generalized Vegetation Index: Evidence from Land Cover Segmentation in Agriculture
Team CNUPR_TH2L: Van Thong Huynh, Soo-Hyung Kim, In-Seop Na (Chonnam National University, Chosun University)
- Method: Multi-spectra Attention in Aerial Agricultural Segmentation
Team TeamTiger: Ujjwal Baid, Shubham Innani, Prasad Dutande, Bhakti Baheti, Sanjay Talbar (SGGS Institute of Engineering and Technology)
- Method: Eff-PN: A Novel Deep Learning Approach for Anomaly Pattern Segmentation in Aerial Agricultural Images
Final leaderboard (all values are IoU in %):

Team | mIoU (%) | Background | Cloud shadow | Double plant | Planter skip | Standing water | Waterway | Weed cluster |
---|---|---|---|---|---|---|---|---|
Hyunseong | 63.9 | 80.6 | 56.0 | 57.9 | 57.5 | 75.0 | 63.7 | 56.9 |
seungjae | 62.2 | 79.3 | 44.4 | 60.4 | 65.9 | 76.9 | 55.4 | 53.2 |
yjl912.2 | 61.5 | 80.1 | 53.7 | 46.1 | 48.6 | 76.8 | 71.5 | 53.6 |
ddcm | 60.8 | 80.5 | 51.0 | 58.6 | 49.8 | 72.0 | 59.8 | 53.8 |
RodrigoTrevisan | 60.5 | 80.2 | 43.8 | 57.5 | 51.6 | 75.3 | 66.2 | 49.2 |
SYDu | 59.5 | 81.3 | 41.6 | 50.3 | 43.4 | 73.2 | 71.7 | 55.2 |
agri | 59.2 | 78.2 | 55.8 | 42.9 | 42.0 | 77.5 | 64.7 | 53.2 |
Tennant | 57.4 | 79.9 | 36.6 | 54.8 | 41.4 | 69.8 | 66.9 | 52.0 |
celery03.0 | 55.4 | 79.1 | 38.9 | 43.3 | 41.2 | 73.0 | 61.5 | 50.5 |
stevenwudi | 55.0 | 77.4 | 42.0 | 54.4 | 20.1 | 69.5 | 67.7 | 53.8 |
PAII | 55.0 | 79.9 | 38.6 | 47.6 | 26.2 | 74.6 | 62.1 | 55.7 |
agrichallenge1.2 | 54.6 | 80.9 | 50.9 | 39.3 | 29.2 | 73.4 | 57.8 | 50.5 |
hui | 54.0 | 80.2 | 41.6 | 46.4 | 20.8 | 72.8 | 64.8 | 51.4 |
shenchen61.6 | 53.7 | 79.4 | 36.7 | 56.3 | 21.6 | 67.0 | 61.8 | 52.8 |
NTU | 53.6 | 79.8 | 41.4 | 49.4 | 13.5 | 73.3 | 61.8 | 56.0 |
tpys | 53.0 | 81.1 | 50.5 | 37.1 | 25.9 | 67.4 | 58.7 | 50.1 |
Simple | 52.7 | 80.2 | 40.0 | 45.2 | 24.6 | 70.9 | 57.6 | 50.4 |
Ursus | 52.3 | 78.9 | 36.3 | 37.8 | 34.4 | 69.3 | 57.1 | 52.3 |
liepieshov | 52.1 | 77.2 | 40.2 | 46.0 | 16.0 | 71.3 | 62.9 | 51.1 |
Lunhao | 49.4 | 79.5 | 40.4 | 38.8 | 10.5 | 69.4 | 58.3 | 49.1 |
tetelias-mipt | 49.2 | 80.4 | 37.8 | 34.8 | 4.6 | 70.6 | 62.5 | 53.8 |
Dataloader | 48.9 | 79.1 | 42.0 | 35.8 | 9.1 | 68.7 | 56.7 | 51.3 |
Hakjin | 46.4 | 78.6 | 32.0 | 38.3 | 1.8 | 66.2 | 58.0 | 49.9 |
JianyuTANG | 44.6 | 78.1 | 37.9 | 31.8 | 15.4 | 47.3 | 54.8 | 46.9 |
haossr | 43.9 | 79.2 | 21.4 | 28.1 | 2.7 | 67.5 | 56.4 | 52.3 |
rpartsey | 41.5 | 72.5 | 21.6 | 36.2 | 9.1 | 59.7 | 40.7 | 50.6 |
baidujjwal | 40.8 | 75.2 | 26.1 | 40.1 | 9.9 | 48.0 | 37.1 | 49.5 |
Chaturlal | 40.7 | 77.7 | 23.0 | 20.4 | 5.0 | 55.0 | 51.0 | 52.9 |
Sciforce | 40.2 | 80.5 | 29.6 | 24.4 | 0.0 | 41.2 | 55.9 | 50.0 |
MustafaA | 40.1 | 76.5 | 34.4 | 25.6 | 11.1 | 46.0 | 36.5 | 50.3 |
HaotianYan | 36.8 | 77.1 | 21.9 | 25.1 | 13.7 | 57.5 | 24.3 | 37.9 |
gro | 36.3 | 76.4 | 37.5 | 8.4 | 0.0 | 60.3 | 29.7 | 41.8 |
oscmansan | 35.5 | 71.6 | 29.6 | 3.0 | 0.0 | 52.4 | 46.2 | 45.9 |
ThorstenC | 33.6 | 72.3 | 22.3 | 10.0 | 2.0 | 40.8 | 40.1 | 47.8 |
ZHwang | 33.5 | 76.5 | 32.4 | 12.9 | 0.0 | 57.2 | 15.9 | 39.9 |
fayzur2.0 | 22.1 | 65.4 | 21.8 | 2.2 | 0.2 | 23.3 | 13.4 | 28.7 |
gaslen.2 | 21.5 | 71.0 | 3.3 | 17.9 | 0.8 | 10.2 | 6.9 | 40.1 |
dvkhandelwal | 16.3 | 71.5 | 0.0 | 0.0 | 0.0 | 42.6 | 0.0 | 0.0 |
ajeetsinghiitd | 10.3 | 56.9 | 0.2 | 0.4 | 0.0 | 0.0 | 0.1 | 14.5 |
- We accept papers related to Agriculture-Vision through a rigorous peer-review process, with program committee members from multiple research communities including computer vision, machine learning, image processing, remote sensing, and agriculture.
- All accepted papers will be published in the IEEE CVPR 2020 Workshop Proceedings.
- Several top accepted papers will be given oral presentations at the workshop; the remaining accepted papers will be given poster presentation slots.
The workshop will be open to the broader community, addressing various technical and application aspects of the challenges and opportunities in computer vision research for agriculture. We aim to provide a venue both to showcase current relevant efforts in the interdisciplinary area between computer vision and agriculture, and to encourage further research and conversations within the computer vision community on impactful agriculture-vision problems. We solicit papers on the following wide range of topics:
- Computer vision research for agricultural imagery in general
- Resources and benchmarks for agricultural-imagery-based pattern analysis
- Farmland pattern classification and segmentation from LSHR aerial imagery
- Efficient data sampling methods for effective training on agricultural imagery
- Effective data fusion of multi-spectral image data such as RGB and Near-Infrared (RGB-NIR)
- Robust and stable pattern recognition on sparse and imbalanced annotations
- Other novel vision applications in agriculture
Accepted papers:
- Multi-view Self-Constructing Graph Convolutional Networks with Adaptive Class Weighting Loss for Semantic Segmentation, Qinghui Liu, Michael C. Kampffmeyer, Robert Jenssen, Arnt B. Salberg (Norwegian Computing Center, UiT The Arctic University of Norway)
- Reducing the feature divergence of RGB and near-infrared images using Switchable Normalization, Siwei Yang, Shaozuo Yu, Bingchen Zhao, Yin Wang (Tongji University)
- Finding Berries: Segmentation and Counting of Cranberries using Point Supervision and Shape Priors, Peri Akiva, Kristin Dana, Peter Oudemans, Michael Mars (Rutgers University)
- Leaf Spot Attention Network for Apple Leaf Disease Identification, Hee-Jin Yu, Chang-Hwan Son (Kunsan National University)
- Visual 3D Reconstruction and Dynamic Simulation of Fruit Trees for Robotic Manipulation, Francisco J Yandun, Abhisesh Silwal, George Kantor (Carnegie Mellon University)
- Cross-Regional Oil Palm Tree Detection, Wenzhao Wu, Juepeng Zheng, Haohuan Fu, Weijia Li, Le Yu (Tsinghua University, The Chinese University of Hong Kong)
- Multi-Stream CNN for Spatial Resource Allocation: a Crop Management Application, Alexandre Barbosa, Thiago Marinho, Nicolas Martin, Naira Hovakimyan (University of Illinois at Urbana-Champaign)
- Effective Data Fusion with Generalized Vegetation Index: Evidence from Land Cover Segmentation in Agriculture, Hao Sheng, Xiao Chen, Jingyi Su, Ram Rajagopal, Andrew Ng (Stanford University)
- Deep Transfer Learning For Plant Center Localization, Enyu Cai, Sriram Baireddy, Changye Yang, Melba Crawford, Edward Delp (Purdue University)
- Segmentation and detection from organised 3D point clouds: a case study in broccoli head detection, Justin Le Louëdec, Hector A. Montes, Tom Duckett, Grzegorz Cielniak (University of Lincoln)
- Deep Learning based Corn Kernel Classification, Henry Velesaca, Raúl Mira, Patricia L Suarez, Christian Larrea, Angel Sappa (ESPOL, Computer Vision Center, Spain)
- Improving In-field Cassava Whitefly Pest Surveillance with Machine Learning, Jeremy Tusubira, Solomon Nsumba, Flavia Ninsiima, Benjamin Akera, Guy Acellam, Joyce Nakatumba-Nabende, Ernest Mwebaze, John A Quinn, Tonny Oyana (Makerere University, Google)
- Weakly Supervised Learning Guided by Activation Mapping Applied to a Novel Citrus Pest Benchmark, Edson Riberto Bollis, Helio Pedrini, Sandra Avila (UNICAMP)
- Fine-Grained Recognition in High-throughput Phenotyping, Beichen Lyu, Stuart Smith, Keith Cherkauer (Purdue University)
- A Novel Technique Combining Image Processing, Plant Development Properties, and the Hungarian Algorithm, to Improve Leaf Detection in Maize, Nazifa Khan, Ian McQuillan, Mark Eramian, Oliver A.S. Lyon (University of Saskatchewan, Queen's University)
- Farm Parcel Delineation Using Spatio-temporal Convolutional Networks, Han Lin Aung, Burak Uzkent, Marshall Burke, David Lobell, Stefano Ermon (Stanford University)
- Climate Adaptation: Reliably Predicting from Imbalanced Satellite Data, Ruchit Rawal, Prabhu Pradhan (NSIT, Max Planck Institute for Intelligent Systems)
- Scoring Root Necrosis in Cassava Using Semantic Segmentation, Joyce Nakatumba-Nabende, Jeremy Tusubira, Benjamin Akera, Ernest Mwebaze (Makerere University, Google)
- Active Learning Sampling Strategies For Weed Segmentation, Andrei Polzounov, Anshul Tibrewal, Ben Cline, Olgert Denas, Chris Padwick, Jim Ostrowski (Blue River Technology)
- Convolutional Neural Networks for Image-based Corn Kernel Detection and Counting, Saeed Khaki, Hieu Pham, Ye Han, Andy Kuhl, Wade Kent, Lizhi Wang (Iowa State University, Syngenta)
- Eff-PN: A Novel Deep Learning Approach for Anomaly Pattern Segmentation in Aerial Agricultural Images, Shubham Innani, Prasad Dutande, Ujjwal R Baid, Bhakti Vilas Baheti, Sanjay Talbar (SGGSIET Nanded)
- Deep Unsupervised Deblurring Approach for Improving Crops Disease Classification, Javed Ahmad, Fahad Shamshad, Junaid Maqbool, Ali Ahmed (Information Technology University)
- NDVI Vegetation Index Estimation Through an Unsupervised Deep Learning based Approach, Patricia L Suarez, Angel Sappa, Boris Vintimilla (ESPOL Polytechnic University)
- Plants Disease Detection in Low Data Regime, Shruti Jadon (Brown University (Rhode Island Hospital))
- Semi-Supervised Crop Detection in UAV Orthophotographs, Jana Wieme, Sam Leroux, Simon Cool, Ruben Van De Vijver, Wouter H. Maes, Jan Pieters (Ghent University)
- Counting Corn Stands using Drone Images from Cloud to Edge - A Deep Learning-based Object Detection Approach, Zhiyi Li, Kshitiz Dhakal, Song Li (Virginia Tech)