wichmann-lab/model-vs-human

Benchmark your model on out-of-distribution datasets with carefully collected human comparison data

 
 

modelvshuman: Does your model generalise better than humans?

modelvshuman is a Python library to benchmark the gap between human and machine vision. Using this library, both PyTorch and TensorFlow models can be evaluated on 17 out-of-distribution datasets with high-quality human comparison data.

🏆 Benchmark

The top-10 models are listed here; training dataset size is indicated in brackets. Additionally, standard ResNet-50 is included as the last entry of the table for comparison. Model ranks are calculated across the full range of 52 models that we tested. If your model scores better than some (or even all) of the models here, please open a pull request and we'll be happy to include it here!

Most human-like behaviour

| winner | model | accuracy difference ↓ | observed consistency ↑ | error consistency ↑ | mean rank ↓ |
|---|---|---|---|---|---|
| 🥇 | CLIP: ViT-B (400M) | .023 | .758 | .281 | 1 |
| 🥈 | SWSL: ResNeXt-101 (940M) | .028 | .752 | .237 | 3.67 |
| 🥉 | BiT-M: ResNet-101x1 (14M) | .034 | .733 | .252 | 4 |
| 👏 | BiT-M: ResNet-152x2 (14M) | .035 | .737 | .243 | 4.67 |
| 👏 | ViT-L (1M) | .033 | .738 | .222 | 6.67 |
| 👏 | BiT-M: ResNet-152x4 (14M) | .035 | .732 | .233 | 7.33 |
| 👏 | BiT-M: ResNet-50x1 (14M) | .042 | .718 | .240 | 9 |
| 👏 | BiT-M: ResNet-50x3 (14M) | .040 | .726 | .228 | 9 |
| 👏 | ViT-L (14M) | .035 | .744 | .206 | 9.67 |
| 👏 | SWSL: ResNet-50 (940M) | .041 | .727 | .211 | 11.33 |
| ... | standard ResNet-50 (1M) | .087 | .665 | .208 | 29 |
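The consistency columns measure trial-by-trial agreement between a model and human observers: observed consistency is the fraction of trials on which model and human are both correct or both wrong, and error consistency corrects that for the agreement expected by chance from the two accuracies alone (Cohen's kappa). A minimal, self-contained sketch of these standard definitions (illustrative only, not the library's implementation):

```python
# Sketch of the two consistency metrics, following their standard
# definitions. This is an illustration, not modelvshuman's own code.

def observed_consistency(model_correct, human_correct):
    """Fraction of trials where model and human are both right or both wrong."""
    return sum(m == h for m, h in zip(model_correct, human_correct)) / len(model_correct)

def error_consistency(model_correct, human_correct):
    """Cohen's kappa: observed consistency corrected for the agreement
    expected by chance given the two accuracies alone."""
    p_model = sum(model_correct) / len(model_correct)
    p_human = sum(human_correct) / len(human_correct)
    c_expected = p_model * p_human + (1 - p_model) * (1 - p_human)
    c_observed = observed_consistency(model_correct, human_correct)
    return (c_observed - c_expected) / (1 - c_expected)

# Toy data: per-trial correctness for a model and a human observer.
model = [True, True, False, False, True, False]
human = [True, False, False, True, True, False]
print(round(observed_consistency(model, human), 3))  # 0.667
print(round(error_consistency(model, human), 3))     # 0.333
```

An error consistency near 0 means the model's errors overlap with human errors no more than chance predicts, even if raw accuracies are similar.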

Highest out-of-distribution robustness

| winner | model | OOD accuracy ↑ | rank ↓ |
|---|---|---|---|
| 🥇 | ViT-L (14M) | .733 | 1 |
| 🥈 | CLIP: ViT-B (400M) | .708 | 2 |
| 🥉 | ViT-L (1M) | .706 | 3 |
| 👏 | SWSL: ResNeXt-101 (940M) | .698 | 4 |
| 👏 | BiT-M: ResNet-152x2 (14M) | .694 | 5 |
| 👏 | BiT-M: ResNet-152x4 (14M) | .688 | 6 |
| 👏 | BiT-M: ResNet-101x3 (14M) | .682 | 7 |
| 👏 | BiT-M: ResNet-50x3 (14M) | .679 | 8 |
| 👏 | SimCLR: ResNet-50x4 (1M) | .677 | 9 |
| 👏 | SWSL: ResNet-50 (940M) | .677 | 10 |
| ... | standard ResNet-50 (1M) | .559 | 31 |

🔧 Installation

Simply clone the repository to a location of your choice and follow these steps:

  1. Set the repository home path by running the following from the command line:

    export MODELVSHUMANDIR=/absolute/path/to/this/repository/
    
  2. Install the package (drop the -e option if you don't intend to add your own model or make any other changes):

    pip install -e .
    

🔬 User experience

Simply edit examples/evaluate.py as desired. This will evaluate a list of models on out-of-distribution datasets and generate plots. If you then compile latex-report/report.tex, all plots will be included in one convenient PDF report.

🐫 Model zoo

The following models are currently implemented:

If you add or implement your own model, please make sure to compute its ImageNet accuracy as a sanity check.

How to load a model

If you just want to load a model from the model zoo, this is what you can do:

    # loading a PyTorch model from the zoo
    from modelvshuman.models.pytorch.model_zoo import InfoMin
    model = InfoMin("InfoMin")

    # loading a TensorFlow model from the zoo
    from modelvshuman.models.tensorflow.model_zoo import efficientnet_b0
    model = efficientnet_b0("efficientnet_b0")

How to list all available models

All implemented models are registered in a model registry, which can be used to list the available models for a given framework:

    from modelvshuman import models
    
    print(models.list_models("pytorch"))
    print(models.list_models("tensorflow"))
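Conceptually, such a registry is just a per-framework mapping from model names to constructors. The pattern can be sketched in a few lines of plain Python (illustrative names only, not the library's actual internals):

```python
# Minimal model-registry pattern, illustrating how list_models-style
# lookups can work. Names here are illustrative, not modelvshuman's code.

_registry = {"pytorch": {}, "tensorflow": {}}

def register_model(framework, name):
    """Decorator that records a model constructor under a framework."""
    def decorator(constructor):
        _registry[framework][name] = constructor
        return constructor
    return decorator

def list_models(framework):
    """Return the sorted names of all models registered for a framework."""
    return sorted(_registry[framework])

@register_model("pytorch", "my_fancy_model")
def my_fancy_model():
    # A real entry would build and return the model object here.
    return "fancy model instance"

print(list_models("pytorch"))  # ['my_fancy_model']
```

Registering at import time like this is why simply adding a model to model_zoo.py makes it discoverable by name.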

How to add a new model

Adding a new model is possible for standard PyTorch and TensorFlow models. Depending on the framework (pytorch / tensorflow), open modelvshuman/models/<framework>/model_zoo.py. Here, you can add your own model with a few lines of code, similar to how you would normally load it. If your model has a custom model definition, create a new subdirectory modelvshuman/models/<framework>/my_fancy_model/ containing fancy_model.py, which you can then import from model_zoo.py via from .my_fancy_model import fancy_model.

📁 Datasets

In total, 17 datasets with human comparison data collected under highly controlled laboratory conditions are available.

Twelve datasets correspond to parametric or binary image distortions. Top row: colour/grayscale, contrast, high-pass, low-pass (blurring), phase noise, power equalisation. Bottom row: opponent colour, rotation, Eidolon I, II and III, uniform noise.

The remaining five datasets correspond to the following nonparametric image manipulations: sketch, stylized, edge, silhouette, texture-shape cue conflict.

How to load a dataset

Similarly, if you're interested in just loading a dataset, you can do this via:

    from modelvshuman.datasets import sketch
    dataset = sketch(batch_size=16, num_workers=4)
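The batch_size argument controls how many images are grouped together per iteration. The chunking behaviour it implies can be sketched generically (a toy illustration, not modelvshuman's loader):

```python
# Generic batching sketch: items are yielded in fixed-size chunks,
# with a smaller final chunk if the total isn't divisible by batch_size.
# This illustrates the concept only; it is not modelvshuman's code.

def batched(items, batch_size):
    """Yield successive chunks of at most batch_size items."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

images = list(range(40))          # stand-in for 40 image paths
batch_sizes = [len(b) for b in batched(images, 16)]
print(batch_sizes)                # [16, 16, 8]
```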

How to list all available datasets

    from modelvshuman import datasets
    
    print(list(datasets.list_datasets().keys()))

💳 Credit

We collected psychophysical data ourselves, but we used existing image dataset sources. 12 datasets were obtained from Generalisation in humans and deep neural networks. 3 datasets were obtained from ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. Additionally, we used 1 dataset from Learning Robust Global Representations by Penalizing Local Predictive Power (sketch images from ImageNet-Sketch) and 1 dataset from ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness (stylized images from Stylized-ImageNet).

We thank all model authors and repository maintainers for providing the models described above.
