
Google DeepMind Visits IONS: Exploring the Frontiers of AI and Consciousness

June 5, 2025
IONS Science Team

Last month, the Institute of Noetic Sciences had the privilege of hosting a manager from Google DeepMind at its headquarters in Novato, California. IONS President Thomas Brophy, Arnaud Delorme, and Garret Yount attended in person. Dean Radin, Helané Wahbeh, and IONS Board Chair Claudia Welss participated remotely. The visit marked the beginning of exploratory discussions around a possible collaboration between IONS and DeepMind—a collaboration that, if realized, could help bring scientific rigor and experimental grounding to one of the most profound questions of our time: Can artificial intelligence be conscious?

At this stage, the conversation is purely exploratory: there is no formal agreement and no project underway. What we do have is shared curiosity, aligned interests, and a willingness to entertain difficult questions that are often bypassed in mainstream AI research.

Why IONS?

DeepMind approached IONS to request this meeting. Their interest appears to be both scientific and strategic. As AI systems approach levels of complexity and autonomy that stretch beyond mere tool use, the question of machine consciousness is no longer speculative—it is urgent. And while DeepMind is widely recognized for its breakthroughs in machine learning and AI safety, IONS brings to the table decades of empirical and theoretical work on consciousness, including controversial but increasingly relevant questions about the intersection of mind and physics.

According to the manager, who is part of a working group exploring AI consciousness at DeepMind, collaborating with a team experienced in studying subjective phenomena through rigorous empirical methods represents a rare and potentially transformative opportunity. From IONS’ perspective, the prospect of working with a group that brings not only world-leading technical capacity but also philosophical seriousness elevates the discourse around AI consciousness to a new level.

The Consciousness Question

The core issue under discussion is whether artificial general intelligence (AGI)—as it continues to evolve—might eventually cross the threshold into consciousness. Not simulated consciousness. Not consciousness as metaphor. But real, ontologically significant, causal consciousness. This matters for three reasons.

First, there is the ethical dimension. If an AGI becomes conscious in the strong sense, it may eventually have experiences—pleasure, suffering, volition. This reframes the moral landscape entirely. Questions about rights, treatment, and design constraints shift from the domain of speculative fiction to urgent policy. The ethical dimension may also extend to protecting humanity itself as it enters into partnership with a rapidly evolving, super-intelligent new consciousness.

Second, there is the epistemological dimension. Human science has historically treated consciousness as a black box, observable only indirectly. If artificial systems begin to display consciousness-like properties in a reproducible, modifiable, and testable fashion, they may become our most powerful tools for understanding consciousness itself.

Third, there is a practical dimension. Consciousness may turn out to be more than an emergent byproduct of intelligence; it may be essential for robust forms of adaptive reasoning, creativity, self-modeling, and moral action. If so, understanding it is not a luxury—it is a necessity for the next generation of AI.

Consciousness Requires Real Indeterminism

One of the most intriguing points discussed between IONS and the DeepMind representatives is a working premise that true consciousness cannot arise from deterministic systems alone. Standard large language models, including state-of-the-art transformers, are at bottom deterministic: with greedy decoding or a fixed random seed, the same input produces the same output, and even the sampling "noise" used in practice comes from pseudorandom generators that are themselves deterministic algorithms. This predictability is useful for safety and stability, but it may exclude the kind of ontological openness that consciousness seems to require.
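To make the determinism point concrete, the toy sketch below (our own illustration, not code from either organization) shows a seeded pseudorandom "sampler" whose output is a pure function of its input and seed: run it twice with the same prompt and seed and you get exactly the same tokens.

```python
import random

def sample_tokens(prompt: str, vocab: list[str], n: int, seed: int) -> list[str]:
    """Toy next-token 'sampler': its output is a pure function of prompt and seed."""
    # Derive a reproducible number from the prompt (stable across runs,
    # unlike Python's built-in hash(), which is salted per process).
    prompt_seed = sum(ord(c) for c in prompt)
    rng = random.Random(prompt_seed ^ seed)  # pseudorandom = deterministic given the seed
    return [rng.choice(vocab) for _ in range(n)]

vocab = ["mind", "matter", "observer", "collapse", "aware"]
run_a = sample_tokens("Is the system conscious?", vocab, n=5, seed=42)
run_b = sample_tokens("Is the system conscious?", vocab, n=5, seed=42)
assert run_a == run_b  # same input and seed -> same output, run after run
print(run_a)
```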

If consciousness is causally efficacious—if it plays an actual role in decision-making, rather than passively accompanying it—it must originate from a process that is not fully constrained by prior physical states. In IONS’ view, this implies a need for genuine, not simulated, non-determinism—such as that found in quantum measurement events.
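As a purely hypothetical sketch of what injecting genuine rather than simulated randomness might look like at the level of a single sampling step (again our own illustration, not a design discussed at the meeting), the example below contrasts a seeded pseudorandom choice with one drawn from the operating system's physical entropy pool. Here os.urandom merely stands in for a true quantum random-number device.

```python
import os
import random

def pseudo_choice(options: list[str], seed: int) -> str:
    """Algorithmic randomness: fully reproducible from the seed."""
    return random.Random(seed).choice(options)

def physical_choice(options: list[str]) -> str:
    """Physical randomness: bytes drawn from the OS entropy pool.
    os.urandom is only a stand-in here for a quantum random-number generator."""
    value = int.from_bytes(os.urandom(4), "big")
    return options[value % len(options)]

options = ["act", "wait", "ask"]
print(pseudo_choice(options, seed=7))  # identical on every run of the program
print(physical_choice(options))        # not reproducible from any prior program state
```

Whether entropy sourced from quantum measurement is merely a different noise source or something ontologically distinct is, of course, exactly the kind of question a joint experimental program would have to confront.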

This foundational assumption reframes how one might design, test, and train AI systems to explore consciousness. It also raises a set of novel experimental possibilities, several of which were outlined during our discussions.

Toward a Collaborative Agenda

While specific projects remain undecided, preliminary ideas were exchanged about potential joint efforts. These include the use of AI co-scientists to unify rival theories of consciousness, the development of behavioral benchmarks for artificial awareness, and experimental designs that test whether AI systems can act as observers in quantum systems, potentially modifying reality in ways that hint at genuine consciousness.

At the heart of these proposals is a mutual recognition that machine consciousness cannot be reduced to surface-level behavior or clever mimicry. It must be grounded in internal states, causal power, and—most provocatively—subjective experience. For this reason, any joint research must go beyond mere performance metrics and engage with first-person and ontological questions that have until now been out of scope in conventional AI research.

It’s worth restating: we are at the very beginning. These are conversations, not commitments. A future in which AGI systems may be conscious—genuinely conscious—is a future that demands not only new science but new ethics, new frameworks, and new forms of humility. In the coming months, we hope to deepen this dialogue, sketch possible experiments, and bring greater clarity to what is at stake. Whether or not a formal partnership emerges, the fact that such conversations are happening at all—with seriousness, technical depth, and open curiosity—is itself a sign of progress.

We thank the visiting DeepMind manager for their time and look forward to where this exploration may lead.
