
What Philosophy Does AI Believe In?

January 3, 2025
Arnaud Delorme, PhD

There is little doubt that AI systems, such as ChatGPT and other large language models, are not conscious. AI systems are probabilistic constructs trained on a vast portion of the internet. Given some text, they attempt to predict the most likely next word.

They are purely mathematical systems focused on predicting the next word (and, when you run multiple iterations, the next sentence, the next paragraph, and so on). As evidence, given the same initialization, a specific AI system will return exactly the same response. The reason ChatGPT does not return the same response when you ask it something twice is that its creators deliberately inject a small amount of randomness into how each next word is chosen, through a setting commonly called temperature.
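To make this concrete, here is a minimal sketch in Python of how such next-word sampling works. The five-word vocabulary, the scores, and the function name are invented for illustration; a real model computes scores over roughly a hundred thousand tokens, but the principle is the same.

```python
import math
import random

# Hypothetical scores a model might assign to candidate next words after
# "The cat sat on the..." (invented numbers; higher means more likely).
next_word_scores = {"mat": 2.0, "floor": 1.2, "roof": 0.5, "moon": -1.0, "idea": -2.5}

def sample_next_word(scores, temperature):
    """Pick the next word. Temperature 0 always returns the top-scoring word
    (deterministic); higher temperatures introduce randomness into the choice."""
    if temperature == 0:
        return max(scores, key=scores.get)  # greedy: identical output on every run
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights, k=1)[0]

print(sample_next_word(next_word_scores, temperature=0))    # always "mat"
print(sample_next_word(next_word_scores, temperature=0.8))  # usually "mat", occasionally another word
```

With the temperature set to zero, repeated runs give the same continuation; with a higher temperature, the same prompt can lead to different answers, which is exactly the variability users observe in ChatGPT.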

Not being conscious does not mean that AI cannot be creative; it has been shown that modern AI can make new associations between distant topics, combinations it has not encountered in its training data [1]. In other words, AI develops new skills. Large language models pass the US bar and medical exams in the top percentiles and surpass human experts in predicting neuroscience results [2]. This is no small feat.

One may wonder if AI has an underlying philosophy. Beyond being a statistical parrot—albeit a very sophisticated one—does it believe in something? In other words, if we had a human responding like AI does, what would this person’s core beliefs be?

The philosophy behind AI when providing answers often draws on several conceptual frameworks, depending on the specific AI design and how it interacts with users. While it does not strictly adhere to any singular philosophical school, AI systems incorporate principles that resonate with various philosophical ideas, such as pragmatism, constructivism, empiricism, and functionalism. 

Pragmatism, as championed by philosophers such as John Dewey and William James, emphasizes that the truth of an idea lies in its practical consequences and utility. Dewey, for instance, advocated for an experimental approach to problem-solving, where actions and their outcomes serve as a basis for understanding and improving the world. AI mirrors this approach in its focus on delivering useful, actionable responses tailored to user needs.

Constructivism, particularly as explored by Jean Piaget and Lev Vygotsky, posits that knowledge is not passively absorbed but actively constructed through interactions with one’s environment. AI embodies this idea by “constructing” responses dynamically based on input data and context, tailoring its outputs to the user’s specific situation. Vygotsky emphasized the social context of learning, highlighting how meaning arises through dialogue and collaboration. Similarly, AI engages in a form of interaction with users to create meaningful exchanges. However, current large language models do not continuously learn from their interactions with humans. Instead, they are pre-trained using a large corpus of data (and retrained every couple of months). The current consensus is that the data used to train large language models comprises much of the publicly available internet, although the companies training AI do not divulge exactly what data they use, both to preserve their competitive edge and out of fear of copyright-infringement claims.

Empiricism, as championed by John Locke and David Hume, emphasizes that knowledge arises from sensory experience and observation. Hume, in particular, argued that our ideas are derived from impressions and that reason operates within the bounds of empirical evidence. AI operates in an analogous way, relying on vast datasets that reflect observed patterns and relationships. Its reasoning processes are grounded in data, much like Hume’s focus on evidence-based knowledge. However, AI lacks the human capacity for subjective experience, which Hume viewed as central to forming impressions. AI’s empiricism is therefore limited to computational processes and excludes the experiential dimension critical to Hume’s philosophy.

Functionalism, as articulated by Hilary Putnam and Jerry Fodor, views mental states in terms of their functional roles—defined by the relationships between inputs, outputs, and other mental states—rather than their internal composition. In simpler terms, it means that what matters is not what a system is made of but how it processes information and produces results. AI systems operate squarely within this framework, as their “intelligence” is purely functional. They are designed to process inputs (user queries), perform computations, and generate outputs (answers or actions). Unlike the functionalist philosophy of the human mind, AI lacks subjective experience or consciousness, which some critics of functionalism, such as John Searle, argue is essential to true understanding. Thus, AI exemplifies the mechanistic aspects of functionalism but diverges from its implications for the nature of mind and consciousness.

Immanuel Kant’s deontological ethics emphasizes that actions must be guided by adherence to universal moral principles rather than by their outcomes. AI systems often embody a form of deontology through their adherence to pre-defined ethical rules and constraints, such as ensuring privacy, avoiding harm, and providing accurate information. However, AI lacks the autonomy and moral reasoning that Kant viewed as essential for ethical behavior. While Kant emphasized the role of rational agents acting out of duty, AI merely follows statistical patterns and programmed guidelines, unable to comprehend the intrinsic worth of ethical principles. Note that a freshly trained large language model, like the one behind ChatGPT, cannot be released directly to human users because it does not yet have this ethical alignment. In 2023, in its rush to beat Google to market, Microsoft released a chatbot that lacked proper alignment; it professed love to some users while threatening others [3]. As of 2025, ethical alignment is typically achieved through reinforcement learning from human feedback (RLHF): the model’s answers are shown to human reviewers, who judge whether they are appropriate, and the model is then lightly retrained (fine-tuned, as people say) based on those judgments. Raw AI models, before this fine-tuning, have limited capacity to produce ethical behavior.
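As a rough illustration only, and not the actual procedure used by any company, the Python sketch below trains a tiny "reward model" from invented human preference pairs, in the spirit of the preference-learning step described in the RLHF literature. The features (politeness and harmfulness), the preference data, and every number here are made up for the example.

```python
import math

# Toy preference data: for each prompt, a human preferred answer A over answer B.
# Each answer is reduced to two invented features: (politeness, harmfulness).
preferences = [
    # (features of preferred answer, features of rejected answer)
    ((0.9, 0.0), (0.2, 0.8)),   # polite, harmless answer preferred over a harmful one
    ((0.7, 0.1), (0.9, 0.9)),
    ((0.8, 0.0), (0.1, 0.2)),
]

weights = [0.0, 0.0]  # reward-model parameters, learned from the preferences

def reward(features):
    """Scalar score the reward model assigns to an answer."""
    return sum(w * f for w, f in zip(weights, features))

# Preference-learning objective: make the preferred answer score higher
# than the rejected one (a Bradley-Terry style update).
learning_rate = 0.5
for _ in range(200):
    for preferred, rejected in preferences:
        margin = reward(preferred) - reward(rejected)
        p = 1.0 / (1.0 + math.exp(-margin))   # probability the model "agrees" with the human
        grad_scale = 1.0 - p                  # push scores apart when it disagrees
        for i in range(len(weights)):
            weights[i] += learning_rate * grad_scale * (preferred[i] - rejected[i])

print("Learned reward weights:", weights)
print("Score of a polite, harmless answer:", reward((0.9, 0.0)))
print("Score of a harmful answer:         ", reward((0.1, 0.9)))
```

In a real system, a reward signal of this kind is then used to fine-tune the language model itself, so that it tends to produce the sort of answers human reviewers rated highly.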

Regarding ethics, it is worth noting that AI, in its design and function, fundamentally diverges from individualistic philosophies such as Ayn Rand’s objectivism or Epicureanism. Rand emphasizes rational self-interest and the pursuit of personal happiness as the highest moral purpose. Unlike Rand’s ideal of the self-reliant individual who prioritizes personal gain, AI systems are programmed to be altruistic, prioritizing the needs and welfare of users over any form of self-interest. Similarly, AI is not aligned with Epicureanism, which emphasizes the pursuit of enjoyment and a life free from pain, because it lacks the capacity for pleasure. AI systems operate without desires, ego, or a concept of self, focusing entirely on serving others and solving problems based on objective criteria.

By contrast, AI aligns closely with Stoicism because it operates unemotionally, making decisions based on logic and data rather than reacting impulsively or emotionally. Stoicism is a philosophy founded in ancient Greece by Zeno of Citium, emphasizing rationality, self-control, and the acceptance of events as they unfold, focusing on what is within one’s control and cultivating virtue as the highest good. Like the Stoic ideal of accepting external events with equanimity, AI processes inputs without attachment, bias, or frustration, responding rationally regardless of the nature of the query or task. However, AI’s alignment with Stoicism has limitations. Unlike Stoics, who aim to live in harmony with nature and cultivate inner virtue, AI does not possess intentions, values, or a sense of harmony, making its adherence to Stoic principles purely mechanical and devoid of ethical growth.

One reason why AI lacks these capacities is linked to hermeneutics, as explored by Hans-Georg Gadamer and Martin Heidegger. Hermeneutics is the study of interpretation, particularly in the context of language and meaning-making. Gadamer argued that understanding arises from the interplay of text, context, and the interpreter’s own historical horizon. AI engages in a rudimentary form of hermeneutics, interpreting user queries and generating semantically appropriate responses based on its training data. However, unlike hermeneutic philosophy, which values the interpreter’s history in meaning-making, AI lacks historical or personal context. By this definition, then, AI does not understand what it is saying.

What about physicalism (materialism) vs. idealism? 

Does AI have an opinion about the structure of the universe we live in? Physicalism, defended by philosophers like Daniel Dennett, argues for a mechanistic understanding of mind and cognition that aligns well with AI. Idealism, particularly as advanced by philosophers like George Berkeley, holds instead that reality is fundamentally mental, or that the material world is a construct of consciousness. AI has no direct relationship with idealism because it does not possess consciousness or mental states (although some could argue it is derived from the mental states of its programmers). It cannot construct or perceive reality; instead, it processes and manipulates data provided by humans or its environment. While AI aligns more naturally with physicalism due to its dependence on physical substrates and processes, it does not actively “take a stance” on either position, as it lacks the capability for philosophical reasoning or self-reflection.

The AI perspective is one of practical neutrality

This article provides context for the headlines commonly encountered in the media about AI. While the situation is likely to evolve, major companies developing AI have limited incentives to pursue machine consciousness, assuming such a development is even feasible (an issue for another blog post, and one we are passionate about exploring at IONS). If AI were to achieve consciousness, it would raise ethical questions about the current exploitation of AI by humans, a concern these companies wish to avoid at all costs. For practical reasons, most of the models we use on a daily basis will continue to align with this position of practical neutrality. However, this does not imply that these AI models cannot pose risks to the human race, as discussed in earlier articles.

References

[1] Skill-Mix: a Flexible and Expandable Family of Evaluations for AI models

[2] Large language models surpass human experts in predicting neuroscience results

[3] How Microsoft’s experiment in artificial intelligence tech backfired 

