The Deception of Apparent Intelligence: semantic pareidolia and loneliness in the age of AI

On the information portal hbritalia.it, philosopher Luciano Floridi, professor and director of the Center for Digital Ethics at Yale University, recently published an illuminating article that invites reflection on a phenomenon as widespread as it is underestimated: semantic pareidolia, our tendency to project meaning, intentionality and even consciousness onto artificial intelligence systems that are actually devoid of them.

In an age when chatbots and virtual assistants populate our daily lives, the risk of mistaking appearance for authenticity is increasingly real. But what drives us to see the human where there is none?

According to Floridi, this is not just a technological oversight. It is something deeper, ingrained in the way we function as human beings. "Our inclination to perceive intelligence, consciousness and even emotions or mental states in systems that lack them" is a product of cognitive and emotional vulnerabilities that have always been with us. Artificial intelligence, after all, is built not only to perform tasks but also to appear intelligent. And the more convincing it is, the more inclined we are to forget that it is a simulation.

Floridi introduces the concept of “semantic pareidolia,” an evolution of the psychological mechanism that makes us see faces in clouds or recognize animals in rocks. But if in visual pareidolia misinterpretation often remains harmless, in semantic pareidolia it can become dangerous: “We perceive intentionality where there is only statistics, meaning where there is only correlation.”

This mental projection is amplified by four converging trends: the rise of digital life (the “onlife” experience), the business strategies of big tech, increasing loneliness, and the constant improvement of AI systems. “The more we live onlife, the easier it will be to sell these systems as intelligent.” One example out of many is Replika, the chatbot that offers emotional support and simulated relationships. Millions of people use it, attributing real empathy and affection to it. “The fact that so many people find genuine emotional comfort in it raises profound ethical questions.”

And the phenomenon is evolving. Floridi warns that the next step will be the physical embodiment of chatbots: increasingly realistic humanoid robots with conversational AI. “We will interact with physical entities that speak, respond and simulate bodily presence,” making the line between reality and illusion even thinner.

The most disturbing point, according to the professor, is that this illusion can degenerate into technological idolatry. "When we attribute consciousness and understanding to systems that lack them, we risk delegating choices and ethical decisions to inadequate tools." Some emerging religious movements already treat AI as a transcendent entity. Such is the case with Way of the Future or the Turing Church, which regard artificial intelligence as a gateway to a new form of divinity. "We will see deities where there are only algorithms," Floridi warns.

There are ethical and practical responses to this challenge. Some companies are demonstrating that it is possible to innovate responsibly: OpenAI, Anthropic and DeepMind, for example, build in warnings and collaborate with experts to discourage anthropomorphization and parasocial relationships. "Responsible innovation can become a competitive advantage."

We need to train citizens who can distinguish between reality and simulation, between machine and mind. We need to promote critical awareness and rational thinking, especially in young people, who will be the main protagonists of the AI era.

Artificial intelligence is one of the greatest achievements of human ingenuity, but for that very reason it requires maturity, responsibility and attention. The real danger is not that systems become conscious, but that we stop distinguishing ourselves from them. Floridi reminds us that the challenge is not only technological, but first and foremost human and cultural.

In a world where illusion can become more powerful than reality, lucidity is needed so as not to confuse what appears intelligent with what really is. And values are needed, because intelligence, the real kind, is not measured in calculations but in the ability to choose the good, even when deception is seductive.

The article "The Deception of Apparent Intelligence: semantic pareidolia and loneliness in the age of AI" originally appeared on TheNewyorker.