In recent years technology has established itself no longer as a simple set of tools but as a hegemonic ideology that shapes society, politics and even our perception of time and identity. This idea is not new: as early as the 1960s Jürgen Habermas (born 1929) analyzed how science and technology can consolidate into an ideology in industrial societies, assuming the role of dominant cultural paradigm and masking the real processes of power and domination that operate behind innovations. The new critical edition of Science and Technology as Ideology, published in 2025 with a renewed critical apparatus, underlines this transformation: technology is not neutral, but embodies a way of seeing the world that favours instrumental rationality over democratic deliberation. Today this perspective is particularly urgent given the role of creative artificial intelligence and its many applications, which go far beyond the simple automation of tasks. Language, image and sound generation models can produce new content – texts, images, music and even code – drawing on large, complex datasets and algorithms, and this has redefined the boundary between human cultural production and the automatic generation of meaning. In this dynamic, technology is no longer a neutral instrument but a constitutive factor in social and cognitive processes, affecting not only what we produce but how we think, communicate and make decisions.
The big technology corporations are at the core of this transformation. Companies like Alphabet, Microsoft and Nvidia have consolidated a position of power that is not only economic but epistemic: Google has integrated artificial intelligence into its search engine and global service platforms, expanding its ability to influence access to information, while Nvidia has become one of the most highly capitalized companies in the world thanks to the demand for the GPUs needed to train AI models. This raises issues of economic power and data control, a central concern given that the digital economy rests on information, predictive models and the ability to extract value from huge sets of data.
Looking to the future, within the next ten years prospects such as artificial general intelligence (AGI) could further redefine global economic dynamics. Forecasts by industry analysts suggest that general AI could emerge between 2035 and 2040, with cognitive capacities similar to or greater than human ones and with enormous impacts on sectors such as medicine, logistics, security and military technology. In parallel, the convergence between AI and quantum technologies is accelerating technological evolution, creating an ecosystem in which these two pillars – quantum computational capacity and machine intelligence – feed each other and produce disruptive innovations in areas such as materials simulation, optimization of production processes and encryption.
In the field of biotechnology, the use of AI in genetic research and therapy is already yielding surprising results, such as the possibility of designing safer and more accurate gene therapies in collaboration with major international technology actors. But these developments raise moral questions about the manipulation of living organisms, equity of access to care and the global governance of technologies that can alter the biological foundations of life.
Habermas invites us to confront the fact that the normalization of technology in everyday life – from health to communication, from the economy to knowledge – risks reducing critical self-consciousness, favoring passive acceptance of technical solutions without public debate on the values underlying the adoption of such technologies. This risk is amplified as digital technology also affects the structure of the public sphere itself: the fragmentation of discourse, algorithmic personalization and the polarization of information can erode the basis of democratic deliberation.
From here comes the tension between a hypertechnological future and a vision of deliberative societies: on the one hand, the innovation of advanced software, AI systems integrated with increasingly sophisticated neural networks and multifunctional robots; on the other, the potential impact of quantum technologies that could, within a few years, rewrite the rules of computation and information. Beyond this, emerging applications such as advanced brain-computer interfaces and autonomous robotics will change not only manual work but also cognitive and social work.
The picture that emerges, if interpreted through a critical lens inspired by Habermas, also opens darker scenarios. Some contemporary theoretical studies suggest that the unchecked dominance of advanced AI could accentuate existing inequalities, concentrate economic power in the hands of a few and even take shape as a new form of “techno-feudalism” in which AI becomes exclusive capital and economic democracy erodes. In this apocalyptic scenario, global power relations would no longer run among nation-states but among technological blocs capable of controlling AI infrastructure, data and cognitive models.
Habermas himself, while not rejecting technological innovation, would insist that it be accompanied by ethical rules, democratic deliberation and social control over technical processes. Technology must not overwhelm the human capacity to discuss and define collective goals; it must be embedded in a framework of transparent and participatory governance, so that it does not turn into a system that acts before it is understood by citizens.
Looking at technology ten years out, the future cannot be reduced to a simple mechanical progression of innovations: quantum technologies, artificial intelligence, biotechnology and even space exploration will be part of a complex socio-technical ecosystem in which time, meaning, values and power intertwine. The most extreme apocalyptic future is not inevitable, but it remains a warning: without public deliberation and ethical management of technologies, we risk handing the evolution of society not to the community but to those who hold the means and models that define it. This is the challenge posed by AI and the technologies of the 21st century, a challenge that is at once technical, economic, political and deeply philosophical.
The article Generating the Future: AI, Power and the Ethical Challenge of a Hyperconnected World originally appeared in IlNewyorkese.
