In this paper, we ask: do Vision-Language Models (VLMs), an emerging human-computer interface, perceive visual illusions as humans do, or do they faithfully represent reality? We built VL-Illusion, a new dataset that systematically evaluates this question. Among other findings, we observed that although models' overall agreement with human perception under illusions is low, larger models are more susceptible to visual illusions and align more closely with human perception.