🧠 About Hallucination Prediction
This tool predicts human visual hallucinations using generative inference with adversarially robust neural networks. Robust models develop human-like perceptual biases, allowing them to forecast what perceptual structures humans will experience.
Prediction Methods:
Prior-Guided Drift Diffusion (Primary Method)
Starting from a noisy representation, the model converges toward what it expects to perceive, revealing the predicted hallucination
Increase Confidence
Moving away from unlikely interpretations to reveal the most probable perceptual experience
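Both update rules can be written as small gradient steps on a classifier's output. Below is a minimal sketch, assuming a pretrained adversarially robust PyTorch classifier `model` applied to a batched image tensor; the function names are illustrative, not this tool's actual API:

```python
import torch

def drift_diffusion_step(model, x, expected_class, update_rate, diffusion_noise):
    """One prior-guided drift-diffusion step (hypothetical helper): drift
    toward the percept the model expects, plus stochastic exploration."""
    x = x.detach().requires_grad_(True)
    # Drift term: gradient of the expected class's log-probability w.r.t. x.
    log_prob = torch.log_softmax(model(x), dim=-1)[0, expected_class]
    (grad,) = torch.autograd.grad(log_prob, x)
    # Diffusion term: random exploration around the drift direction.
    return (x + update_rate * grad
              + diffusion_noise * torch.randn_like(x)).detach()

def increase_confidence_step(model, x, unlikely_class, update_rate):
    """One Increase Confidence step (hypothetical helper): move away from
    an unlikely interpretation by descending its log-probability."""
    x = x.detach().requires_grad_(True)
    log_prob = torch.log_softmax(model(x), dim=-1)[0, unlikely_class]
    (grad,) = torch.autograd.grad(log_prob, x)
    return (x - update_rate * grad).detach()
```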
Parameters (illustrated in the sketch after this list):
- Drift Noise: Initial uncertainty in the prediction process
- Diffusion Noise: Stochastic exploration during prediction
- Update Rate: Speed of convergence to the predicted hallucination
- Number of Iterations: How many prediction steps to perform
- Model Layer: Which perceptual level to predict from (early edges vs. high-level objects)
- Epsilon (Stimulus Fidelity): How closely the prediction must match the input stimulus
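As one hedged illustration of how these parameters could drive the prediction loop, the sketch below reuses `drift_diffusion_step` from the earlier sketch. The default values, the L-infinity projection implementing Epsilon, and the function name are assumptions, and Model Layer is omitted (selecting a layer would replace the class-probability objective with a feature-space target at that layer):

```python
import torch

def predict_hallucination(model, stimulus, expected_class,
                          drift_noise=0.5, diffusion_noise=0.05,
                          update_rate=0.1, num_iterations=100,
                          epsilon=8 / 255):
    """Hypothetical end-to-end loop wiring together the parameters above."""
    # Drift Noise: begin from an uncertain (noisy) copy of the stimulus.
    x = stimulus + drift_noise * torch.randn_like(stimulus)
    for _ in range(num_iterations):  # Number of Iterations
        # Update Rate / Diffusion Noise: one drift-diffusion step.
        x = drift_diffusion_step(model, x, expected_class,
                                 update_rate, diffusion_noise)
        # Epsilon (Stimulus Fidelity): keep the prediction inside an
        # L-infinity ball around the input stimulus.
        x = stimulus + torch.clamp(x - stimulus, -epsilon, epsilon)
    return x.detach()
```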
Why Does This Work?
Adversarially robust neural networks develop perceptual representations similar to human vision. When generative inference reveals what these networks "expect" to see in an ambiguous image, the result matches what humans hallucinate there, which is what allows the tool to predict human perception.
Developed by Tahereh Toosi