Seeing Faces in Things:
A Model and Dataset for Pareidolia

ECCV 2024

Paper Code Dataset Talk

Mark Hamilton, Simon Stent, Vasha DuTell, Anne Harrington, Jennifer Corbett, Ruth Rosenholtz, William T. Freeman

TL;DR: We introduce a dataset of over 5,000 human-annotated pareidolic images. We also link pareidolia in algorithms to the process of learning to detect animal faces.

Abstract

The human visual system is well-tuned to detect faces of all shapes and sizes. While this brings obvious survival advantages, such as a better chance of spotting unknown predators in the bush, it also leads to spurious face detections. "Face pareidolia" describes the perception of face-like structure among otherwise random stimuli: seeing faces in coffee stains or clouds in the sky. In this paper, we study face pareidolia from a computer vision perspective. We present an image dataset of "Faces in Things", consisting of five thousand web images with human-annotated pareidolic faces. Using this dataset, we examine the extent to which a state-of-the-art human face detector exhibits pareidolia, and find a significant behavioral gap between humans and machines. We explore a variety of different strategies to close this gap and discover that the evolutionary need for humans to detect animal faces, as well as human faces, explains some of this gap. Finally, we propose a simple statistical model of pareidolia in images. Through studies on human subjects and our pareidolic face detectors we confirm a key prediction of our model regarding what image conditions are most likely to induce pareidolia.

The Faces in Things Dataset

We introduce "Faces in Things", an annotated dataset of five thousand human-labeled pareidolic face images. Faces in Things is derived from the LAION-5B dataset and annotated with key face attributes and bounding boxes.

Linking Face Pareidolia to Animal Face Detection

Using our dataset, we find that several modern face detectors do not experience pareidolia to the same extent humans do. Interestingly, we show that pareidolic face detection can be significantly improved by fine-tuning a face detector on animal faces. This effect accounts for roughly half of the gap between a pareidolia fine-tuned algorithm and a human face fine-tuned algorithm, shedding new light on why humans might experience pareidolia.

We can even use the deep feature representation of our trained animal and pareidolia detector to compute animal-pareidolia doppelgangers:
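The doppelganger matching above can be sketched as a nearest-neighbor search in embedding space. This is a minimal illustration, not the paper's pipeline: the random arrays below stand in for the real deep features, which would come from the trained animal/pareidolia detector's backbone.

```python
import numpy as np

# Hypothetical deep features: in the real system these would be pooled
# activations from the trained detector; random vectors are used here
# purely to make the sketch runnable.
rng = np.random.default_rng(0)
animal_feats = rng.normal(size=(100, 128))     # 100 animal-face crops
pareidolia_feats = rng.normal(size=(50, 128))  # 50 pareidolic-face crops

def doppelgangers(queries, gallery):
    """For each query embedding, return the index of the most
    cosine-similar gallery embedding."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return (q @ g.T).argmax(axis=1)

# Each pareidolic face is paired with its closest animal face.
matches = doppelgangers(pareidolia_feats, animal_feats)
```

For large galleries, the same cosine search is typically done with an approximate nearest-neighbor index rather than a dense matrix product.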

Predicting Pareidolia in Humans and Machines

Why do some patterns and textures, like the surface of the moon, always seem to provoke pareidolia while others do not? To answer this question, we introduce a simple closed-form model of pareidolic face detection and analyze its behavior. We find evidence of a "goldilocks" zone where the probability of pareidolia is maximized. Intuitively, this occurs when an image generation process is rich enough to match the main spatial modes of a face but isn't too rich to make it difficult to match these modes. We measure the existence of this zone in both humans and machines.
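The goldilocks intuition can be illustrated with a toy closed-form score. This is not the paper's actual model: the vocabulary size, face-mode count, and clutter penalty below are made-up parameters chosen only to show how a match term that grows with richness, combined with a clutter term that shrinks with it, yields an interior maximum.

```python
import math

def pareidolia_score(r, k=4, vocab=50, alpha=0.15):
    """Toy score for a pattern with r active modes drawn uniformly from
    `vocab` candidates: the chance that all k face modes are present,
    discounted by clutter from the r - k extra modes."""
    if r < k:
        return 0.0
    # Hypergeometric chance that the k face modes are among the r drawn.
    p_match = math.comb(vocab - k, r - k) / math.comb(vocab, r)
    # Exponential penalty: extra modes obscure the face structure.
    p_clean = math.exp(-alpha * (r - k))
    return p_match * p_clean

scores = [pareidolia_score(r) for r in range(51)]
best_r = max(range(51), key=lambda r: scores[r])
# The score peaks at an intermediate richness: too few modes and the
# face cannot be matched; too many and clutter drowns it out.
```

Sweeping `r` shows the non-monotonic shape: the score rises from zero, peaks at a mid-range richness, and decays again as the pattern saturates.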

Paper

Bibtex

@misc{hamilton2024seeingfacesthingsmodel,
    title={Seeing Faces in Things: A Model and Dataset for Pareidolia},
    author={Mark Hamilton and
            Simon Stent and
            Vasha DuTell and
            Anne Harrington and
            Jennifer Corbett and
            Ruth Rosenholtz and
            William T. Freeman},
    year={2024},
    eprint={2409.16143},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2409.16143},
}

Contact

For feedback, questions, or press inquiries, please contact Mark Hamilton.