Chatbot
An AI customer support chatbot with high escalation rates and frustrated users. The problem wasn't the AI — it was the conversation design underneath it.
Left: Log analysis identifying the nine conversation patterns where escalation consistently occurred. Right: Flow redesign mapping cleaner paths and warmer handoff transitions.
The client had built an AI chatbot to handle first-line customer support, but escalation rates were eroding the cost and efficiency gains it was supposed to deliver. Users were abandoning conversations mid-flow or requesting human agents almost immediately, and satisfaction scores for the chatbot channel were consistently below those of every other support channel.
The assumption had been that the AI needed better training data. The real problem turned out to be the conversation design framing the AI's responses.
We analysed more than 200 conversation logs to understand where and why users were escalating or abandoning. The failures clustered into nine consistent conversation patterns. The chatbot's opening message offered no structured entry point, so users had to describe their issue in open text, and the responses they received matched surface keywords rather than actual intent. When the bot couldn't help, the handoff to a human agent was abrupt: no acknowledgement of the failure, no transition, just a sudden transfer.
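The keyword-matching failure mode can be illustrated with a toy sketch. The flows, keywords, and example message below are made up for illustration; the client's actual routing logic is not shown in this case study.

```python
# Toy illustration: routing on surface keywords picks the wrong flow
# because it matches tokens, not the user's actual intent.
KEYWORD_FLOWS = {
    "cancel": "cancel_subscription",
    "refund": "refund_request",
    "password": "password_reset",
}

def route_by_keyword(message: str) -> str:
    """Return the first flow whose keyword appears in the message."""
    for keyword, flow in KEYWORD_FLOWS.items():
        if keyword in message.lower():
            return flow
    return "fallback"

# The user is describing a billing dispute, but the word "cancel"
# routes them into the subscription-cancellation flow instead.
route_by_keyword("I was charged twice, please cancel the duplicate charge")
```

A response generated from the wrong flow is exactly what users experienced as "the bot didn't understand what I said", even when the underlying model was capable.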
Users weren't frustrated by the limits of AI. They were frustrated by the absence of conversational awareness — responses that didn't seem to understand what they'd actually said, and a system that couldn't gracefully acknowledge when it had reached its limits.
"The AI was capable enough. The conversation design wasn't. Users were escalating because the bot couldn't acknowledge its own limits — let alone recover from them."
We redesigned the conversation flows for all nine identified patterns. The opening interaction was restructured to offer guided entry points for common support categories, reducing open-text ambiguity without removing flexibility. Response templates were rewritten using plain language principles, structured around user intent rather than keyword matching. The escalation experience was rebuilt as a warm handoff: the bot acknowledges what it couldn't resolve, sets expectations for the human conversation to follow, and passes context so users don't have to repeat themselves.
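The warm handoff described above can be sketched as a small function. The `HandoffPayload` structure, field names, and message wording here are hypothetical, a minimal illustration of the pattern rather than the client's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class HandoffPayload:
    """Context passed to the human agent so the user never repeats themselves."""
    category: str                 # guided entry point the user chose
    unresolved_issue: str         # the bot's summary of what it couldn't solve
    transcript: list = field(default_factory=list)  # conversation so far

def warm_handoff(category: str, unresolved_issue: str, transcript: list):
    """Build the message shown to the user and the payload sent to the agent."""
    # Acknowledge the limit and set expectations, rather than transferring silently.
    message = (
        f"I wasn't able to resolve your {category} issue myself. "
        "I'm connecting you with a support agent now. They already have "
        "the details you've shared, so you won't need to repeat them."
    )
    payload = HandoffPayload(category, unresolved_issue, transcript)
    return message, payload
```

The point of the pattern is that the acknowledgement and the context transfer happen together: the user hears that the bot has reached its limit, and the agent receives everything the user has already said.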
Six weeks after launch, escalation rates had fallen by 43%. The support team reported receiving better-qualified escalations: clearer context, less frustration, and users who had genuinely engaged with the chatbot rather than immediately bypassing it.
The redesigned conversation experience: guided entry, plain language throughout, and a warm handoff when the bot reaches its limits. Escalation rate fell 43% within six weeks.
Got a project?
Say hello