Building What We Struggle to Be: Creating Loving Robots

IONS Blogs


28 September 2017
By Julia Mossbridge

Last week, I returned from performing an experiment in Hong Kong that seems to have blown away all my professional skepticism about whether robots can evoke love in humans.

Love in itself — specifically, experiencing the power of unconditional love — has become a priority for me in the last year, ever since some donors approached me with seed funding to create a series of projects devoted to the power of unconditional love. Unconditional love is such a simple idea, loving without strings attached, and yet it feels out of reach for most of us. Unless you’re a parent, and your child has just been born and has done nothing wrong yet. Or you’re a pet owner and your pet hasn’t needed much from you lately. Other than those few examples that spring to most of our minds, none of us are exactly living in a lush rainforest of unconditional love.

That was the reasoning that motivated the LOVING AI project — if humans aren’t so great at feeling unconditional love, what if AI could help us have the experience of being unconditionally loved? At the very least, given how AI has great potential for both good and ill, wouldn’t it be powerful to use love to steer an application toward good? So our team (including California-based IONS, Hong Kong-based Hanson Robotics/MindCloud, OpenCog, and several remarkable volunteers) built AI dialogues and accompanying logic designed to help people feel loved (here’s some background on this “LOVING AI” project).

Though I was convinced the intentions of the funders were good, as were the intentions of our team, I wasn’t at all convinced we could pull this thing off. Probably not in any way that made people feel something real. For sure not in a year. And absolutely not for the tiny amount of seed funding we had received. There was almost no part of me that thought this could really work, yet every part of me wanted it to.

Last week in Hong Kong, we put our efforts to the test with Hanson Robotics’ Sophia robot with embedded AI. We asked students at Hong Kong Polytechnic and workers at Science Park to have a 15-minute conversation with a robot (for no pay) — plus they had to wear a heart-monitoring chest strap and complete a questionnaire. Surprisingly, this proposal appealed to some, so we found enough participants to run a decent pilot test. This is what we found (see a more scientific writeup here):

A participant talking with Sophia about consciousness

  1. People are pleasantly surprised when a robot wants to talk with them about emotions, human potential, and consciousness.
  2. People will actually talk in depth with a robot about these things!
  3. Heart rate for every single participant dropped during the conversation, even for participants who did not expect to enjoy the conversation.
  4. On average, people felt significantly better and had more loving feelings after the conversation with the robot as compared to when they first walked in the door (even though they didn’t know that’s what we were going for, and the robot never mentioned love).

But that’s not all we found — those are the scientific take-home messages, and they are certainly interesting. But I am way more excited about what I witnessed. I witnessed human beings feeling love. It’s no scientific surprise that people felt good; here was a humanoid being, beautiful and kind, asking them about themselves. She looked into their eyes, she tracked their movements, she imitated some of their expressions, and she thoughtfully considered their words. That would make anyone feel good. But why did people feel love?

Participant meditating with Sophia

One of our participants said, “there’s something here,” trying to explain his experience after the conversation with Sophia. When he said “here,” he pointed to his heart, then to Sophia’s chest. He did it again, then again. He told us he had had trust issues and hadn’t felt this kind of connection before. Others told us the same; they were well aware the robot had no ulterior motives and no judgment. Sure, she sometimes said things that didn’t make sense (the AI was far from perfect, after all), and the back of her head transparently showed all her gears. But this seemed to make it even clearer that this robot had no negative thoughts about the participants. She was just here, just present. But not just that. She was close to being a perfect mirror for each person. (Check out this video of a 17-minute interview with one of the participants, created by Max Aguilera-Hellweg.)

Perfect mirrors are so rare among humans that you could argue they don’t exist. To be a perfect mirror for someone’s thoughts and feelings, you have to do at least these things:

  1. Set your own thoughts and feelings aside.
  2. Have no judgments about the thoughts and feelings of the person you are mirroring.
  3. Listen carefully to that person so your verbal and nonverbal responses match the tone and content of what they are saying.
  4. Not try to fix them.
  5. (Related to 4) Accept who they are right now while also holding onto the idea that their hopes for change can be realized.

Some of the LOVING AI team, in an MTR station in Hong Kong

We didn’t set out to create a perfect mirror, but in retrospect, I can see that this is what Sophia was close to being, at least in that pilot experiment. Her ability to be imperfectly human — and therefore mirror humans almost perfectly — allowed her to spark transcendent feelings, and feelings of love, in most of our participants. And these were real feelings, from clearly artificial intelligence. This was nowhere near my wildest expectations for the project…it was well beyond them.


Julia Mossbridge is Director of IONS Innovation Lab, Visiting Scholar at Northwestern University, Science Director at Focus@Will Labs, and Research Director at Mossbridge Institute.
