Practising Difficult Conversations with AI: A Guide for L&D Teams

Most managers dread difficult conversations and most employees avoid them. AI roleplay gives your team a safe, private space to practise before the real thing.

Here’s a number that should worry every HR leader: 69% of managers say they’re uncomfortable communicating with their employees. Not about small talk — about the conversations that matter. Giving negative feedback. Addressing underperformance. Navigating conflict (Solomon, 2016, Harvard Business Review).

And it’s not just managers. Research from Bravely found that 70% of employees routinely avoid difficult conversations at work — about performance, growth, relationships, and culture. On both sides of the table, people would rather say nothing than say something hard.

The problem is that silence isn’t free.

The Cost of Avoidance

When people avoid difficult conversations, problems don’t go away. They compound.

A study by VitalSmarts (now Crucial Learning) found that each avoided crucial conversation costs an organisation an average of $7,500 and more than seven wasted workdays. One in five employees estimated the cost at over $50,000 per incident. And 40% admitted to wasting two or more weeks ruminating about the unaddressed problem (Grenny & Maxfield, 2016).

That’s the hidden tax of avoidance: not just the unresolved issue itself, but the hours of anxiety, gossip, disengagement, and workarounds that surround it.

Think about what gets avoided in your organisation. The sales rep who isn’t hitting targets. The team member who dominates meetings. The policy that nobody follows because nobody was willing to enforce it. The client relationship that’s souring because nobody addressed the scope creep.

Every one of those is a difficult conversation that someone decided not to have.

Why Training Alone Doesn’t Fix It

Most organisations recognise the problem. Many invest in training — workshops on feedback frameworks, courses on conflict resolution, eLearning modules on “crucial conversations.”

And the training makes sense intellectually. People leave the workshop nodding. They can explain the SBI model or the GROW framework. They pass the quiz.

Then Monday arrives and they avoid the conversation anyway.

The issue isn’t knowledge. It’s confidence and muscle memory. Knowing how to give tough feedback and actually being able to do it — in the moment, under pressure, with a real human reacting in real time — are completely different skills.

Research confirms this gap. Writing in the International Journal of Business Communication, Bradley & Campbell (2016) found that difficult workplace conversations “occur frequently and are generally dreaded,” but that specific communication behaviours — empathy, non-judgmental framing, and protecting the other person’s dignity — significantly improve outcomes.

Those behaviours are skills. And skills require practice.

The Problem with Traditional Roleplay

The L&D world has known for decades that roleplay is one of the most effective ways to build conversational skills. Sales teams do it. Medical schools do it. Therapists train with it.

But traditional roleplay in a corporate setting has serious limitations:

  • It’s awkward. Practising a difficult conversation in front of colleagues feels exposing. Most people hold back rather than risk embarrassment.
  • It’s inconsistent. The quality depends entirely on whoever plays the other role. A colleague who breaks character or goes easy on you isn’t useful practice.
  • It doesn’t scale. You can’t run 200 individual roleplay sessions for a management development programme.
  • There’s no structured feedback. The facilitator might offer observations, but there’s no systematic evaluation of what worked and what didn’t.

So most organisations skip roleplay entirely and default to slides and quizzes. Which is how you end up with 69% of managers who are uncomfortable with the conversations their job requires.

AI Roleplay: Private, Consistent, and Scalable

This is where AI roleplay training changes the equation.

Instead of practising with a colleague (or not practising at all), the employee has a conversation with an AI character who’s been designed to behave like the person they’ll actually be talking to.

A defensive team member. An emotional direct report. A passive-aggressive colleague. A demanding client.

The AI stays in character. It reacts realistically — it might get upset, push back, deflect, or shut down, depending on how the employee handles the conversation. And because it’s private, the employee can be honest, make mistakes, and try different approaches without anyone watching.

This isn’t theoretical. A Stanford University study published at CHI 2024 tested exactly this approach. Participants who practised conflict scenarios using an AI simulation doubled their use of cooperative strategies and reduced competitive, escalating behaviours by 67% in a subsequent real conversation — compared to a control group who only studied conflict resolution theory (Shaikh et al., 2024).

Reading about how to handle conflict didn’t change behaviour. Practising it did.

What This Looks Like in Practice

Here are the difficult conversations your teams can practise with AI roleplay:

Giving Performance Feedback

“I need to tell a team member that their work quality has dropped significantly over the last quarter. They’re likely to get defensive.”

The AI plays the team member — surprised, hurt, making excuses, maybe turning it back on the manager. The learner has to stay empathetic but direct, stick to specific examples, and guide the conversation toward a development plan.

Handling a Complaint

“A customer is angry about a delayed delivery. They’re threatening to switch to a competitor.”

The AI plays the customer — frustrated, interrupting, not interested in excuses. The learner has to listen, acknowledge, de-escalate, and offer a resolution without making promises they can’t keep.

Mediating Team Conflict

“Two of my reports are in constant conflict and it’s affecting the team. I need to address it with one of them.”

The AI plays the team member who sees themselves as the victim. The learner has to stay neutral, focus on behaviours rather than personalities, and move toward a constructive outcome.

Saying No to a Senior Stakeholder

“My director wants me to take on a project that will derail my team’s priorities. I need to push back diplomatically.”

The AI plays the director — enthusiastic, politically powerful, not used to hearing “no.” The learner has to be assertive without being confrontational, offer alternatives, and protect their team’s capacity.

Delivering Redundancy or Restructuring News

“I need to tell a loyal, long-serving employee that their role is being made redundant.”

The AI reacts with shock, then sadness, then anger. The learner has to be compassionate, clear, and legally accurate — all at once.

In each scenario, the AI delivers detailed feedback afterwards: what the learner did well, where they lost empathy or clarity, and specific suggestions for next time.

Why L&D Teams Should Care

The 2024 Achievers Workforce Institute report found that two-thirds of employees want to have hard conversations at work, but feel their managers are unprepared to facilitate them (Achievers, 2024).

That’s a demand signal. Employees aren’t asking for fewer difficult conversations — they’re asking for managers who can handle them.

AI roleplay gives L&D teams a way to close that gap:

  • Private practice — No embarrassment, no audience. People are more willing to try, fail, and learn.
  • Consistent quality — Every employee gets the same calibre of practice partner, regardless of location or team.
  • Measurable improvement — AI-generated feedback creates data on where people struggle and how they improve over time.
  • Always available — No scheduling facilitators, booking rooms, or flying people to workshops.
  • Repeatable — Employees can practise the same conversation multiple times with different approaches until they find what works.

Getting Started

If your organisation struggles with feedback culture, conflict resolution, or management capability — and most do — here’s how to start:

  1. Pick one conversation your managers dread most. Performance feedback is usually the safest starting point.
  2. Build a scenario that matches your real context — your company’s tone, your typical employee reactions, your policies.
  3. Roll it out privately — let managers practise on their own first, without pressure or observation.
  4. Review the feedback data — look for patterns in where people struggle and adjust your broader training accordingly.

The goal isn’t to replace human judgement with AI. It’s to give people enough practice that when the real conversation happens, they’ve already had it five times.

Ready to try it? See how organisations use AI roleplay for difficult conversations, sales, and compliance training, or start free with 2,000 credits.