Would you trust AI to mediate an argument?

Researchers at Google DeepMind recently trained a system of large language models to help people reach agreement on contentious social and political issues. The AI model was trained to identify and present areas where people’s views overlapped. With the help of this AI mediator, small groups of study participants became less divided in their positions on various issues. You can read more from Rhiannon Williams here.

One of the best uses for AI chatbots is brainstorming. I’ve had success in the past using them to draft more assertive or persuasive emails for awkward situations, such as complaining about services or negotiating bills. This latest research suggests they could help us see things from other people’s perspectives too. So why not use AI to patch things up with my friend?

I described the conflict, as I see it, to ChatGPT and asked for advice about what I should do. The response was very validating, because the chatbot endorsed the way I had approached the problem, and its advice was along the lines of what I had been considering anyway. I found it helpful to chat with the bot and get more ideas for dealing with my specific situation. But ultimately I was left dissatisfied: the advice was still pretty generic and vague (“Set your boundary calmly” and “Communicate your feelings”) and didn’t offer the kind of insight a therapist might.

And there’s another problem: every argument has two sides. I started a new chat and described the problem as I believe my friend sees it. The chatbot supported and validated my friend’s decisions, just as it had mine. On one hand, this exercise helped me see things from her perspective; I had, after all, set out to empathize with the other person, not just win an argument. On the other hand, I can easily imagine how relying too heavily on a chatbot that tells us what we want to hear could cause us to double down, preventing us from seeing things from the other person’s point of view.

This served as a good reminder: An AI chatbot is not a therapist or a friend. While it can parrot the vast reams of internet text it’s been trained on, it doesn’t understand what it’s like to feel sadness, confusion, or joy. That’s why I would tread with caution when using AI chatbots for things that really matter to you, and not take what they say at face value. 

An AI chatbot can never replace a real conversation, where both sides are willing to truly listen and take the other’s point of view into account. So I decided to ditch the AI-assisted therapy talk and reached out to my friend one more time. Wish me luck! 


Deeper Learning

OpenAI says ChatGPT treats us all the same (most of the time)

Does ChatGPT treat you the same whether you’re a Laurie, Luke, or Lashonda? Almost, but not quite. OpenAI has analyzed millions of conversations with its hit chatbot and found that ChatGPT will produce a harmful gender or racial stereotype based on a user’s name in around one in 1,000 responses on average, and as many as one in 100 responses in the worst case.
