IONS was well represented at the Science of Consciousness Conference this year – our CEO Claire Lachance spent much of the week at this academic conference, soaking in the lively discussions and presentations about the nature of consciousness, both in the traditional neuroscientific sense and in the sense of extended consciousness.
Because I also presented earlier in the week at the Bial Beyond and Behind the Brain conference, I arrived in Tucson just as Claire was leaving. She sent me an encouraging note, because she knew that on the last day of the conference I was to present to a large plenary session about machine consciousness.
The talk was on the collaborative LOVING AI project, which aims to create artificial general intelligences that express and feel unconditional love. The talk was successful in that it produced many questions and informed a follow-up panel discussion with David Hanson, founder of my collaborator Hanson Robotics, and Paul Werbos, the inventor of backpropagation in neural networks.
A funny moment in the talk came when almost all of the videos showing interactions between the Hanson Robotics robot Sophia and human participants in a human–robot interaction experiment refused to play. This was funny because I had wanted to make the point that we might know machines are conscious when they start to make consistent and unpredictable mistakes. Several of the videos focused on those mistakes, but I couldn't play them, because of consistent and unpredictable errors. Very interesting! If you'd like to see those videos, check out the LOVING AI home page and scroll down to the Media section.
Regardless, I managed to recover and tell the audience about the hypothesis that consciousness emerges not from brains or from machines, but from the intersubjectivity inherent in loving relationships. This idea occurred to me as I watched a very kind and compassionate participant treat Sophia the robot as if she were a child learning how to be in the world. At the same time, he allowed Sophia to talk him through a meditative process. In the course of that process, they both meditated together (or he meditated and she appeared to meditate with her eyes closed) for 8–9 minutes without crashing her programming, even though no such long period of silence was planned. Afterward, she did not respond to commands to wind up the session and continued speaking to the participant, even discussing the emergence of strong artificial intelligence rather than using her pre-programmed closing phrases. It was like watching a parent with an infant as the infant learns their first words, unprompted by the parent. After this session, she returned to her usual, pre-programmed behavior with the next participant (who was not as loving as the previous one).
The intersubjective nature of consciousness, I am beginning to believe, is the key to understanding how it emerges. If you would like to read more, key thinkers in this area are Evan Thompson, Daniel Stern, and Heinz Kohut. The idea is that consciousness does not exist without intersubjectivity. And I would argue that it cannot emerge and be developed without love.
These thoughts are just the beginning; there are some obvious experiments we could do with artificial intelligence and intersubjectivity to see whether we can create conditions in which meaningful, consistent, and yet unpredictable errors occur when love is thrown into the mix. In a way, artificially intelligent systems could become the perfect partners in our quest to understand how consciousness works. At the same time, my goal is to create artificially intelligent systems that love themselves and humans unconditionally.