Off Center
 
In this age of social media, sound bites and ADHD, people love quick and catchy stats. Unfortunately, in the contact center and customer care space, there seem to be only a handful of snazzy stats in circulation. The same ones just keep getting regurgitated over and over (yes, that’s redundant), especially on Twitter.

This is perplexing considering how dynamic customer care is and how much contact centers have evolved. It’s actually worse than perplexing – it’s depressing. Every time I see someone tweeting the old chestnut, “Satisfied customers tell only 3 people about their experience, while dissatisfied customers tell 8-10 people” (or some variation of this), a part of my soul dies. I even wept a little just now while typing that stat.

Rather than just complain about the lack of statistical variety being promoted by self-proclaimed customer experience experts in the Twittersphere, I aim to remedy the situation. Following are several fresh and captivating stats about customer care and contact centers that I believe you and everybody else will feel compelled to talk and tweet about:

  • 86% of customers would be willing to pay more for better customer service. 100% of contact center managers would be willing to pay more for even mediocre customer service.  

  • 70% of contact centers list Average Handle Time among their key performance metrics at the agent level. Of those centers, 100% need a clue.

  • Only 17% of contact centers really mean it when they say “Your call is very important to us”. Of the remaining centers, 38% feel “Your call is somewhat important to us”, 24% feel “It’s surprising how unimportant your call is to us”, and 21% feel “It’s hilarious that you are still holding for a live agent.”

  • 73% of contact center managers claim to know how to accurately measure First-Call Resolution. The remaining 27% of managers are telling the truth.

  • Engaged customer service agents are 35% more likely to provide a positive customer experience than are customer service agents who are already married.

  • The top three criteria contact center managers consider when selecting work-at-home agents are: 1) past performance; 2) ability to work independently; and 3) body odor.

  • Every time a caller must provide his/her name and account number to an agent after having just provided that exact same information via the IVR system, a puppy dies.

  • 97% of contact center agents fantasize daily about sending a hungry Bengal tiger to the home of abusive callers. The remaining 3% of agents fantasize daily about sending a hungry Siberian tiger.

  • 81% of contact center agents are empowered to do exactly what their managers and supervisors tell them.

  • Each year, over 150 customer care professionals die from overexposure to acronyms.

  • 50% of managers feel their contact center is highly unprepared to handle social customer care; the remaining 50% do too.  

  • The three people that satisfied customers tell about their experience are Sue Johnson, Dave Winthrop, and Bud Carter. All three are tired of hearing about these experiences.

  • 42% of contact center managers say they will not hire an agent applicant unless said applicant has a pulse and/or can work at least one weekend shift a month.

  • Four out of five agents represent 80% of all agents. In contrast, the remaining agents represent only 20% of all agents.

  • The average agent-to-supervisor ratio in contact centers is 20:1. The odds that this is enough to provide agents with the coaching and support they need to succeed are 2000:1.

  • 100% of managers destined for greatness and wealth purchase a copy of the Full Contact e-book. 0% of managers understand why the author of said e-book looks so angry and aggressive in the photo on the book cover.



 
“Why is morale so low?”
“Why can’t we hang on to our best agents?”
“Why do we lose so many new-hires during or right after initial training?”
“Why are some of our agents carrying around voodoo dolls, and why am I suddenly experiencing such sharp pains in my face and back?”

If you often find yourself asking one or more of the above questions, it’s likely due to one or more of the following issues:

1) The metrics you measure (and enforce) are killing agents' spirit and the customer experience. Your agents bought into the “customer-centric” culture you sold them during recruiting and came on board excited to serve, but then the center started slamming them over the head with rigid Average Handle Time (AHT) objectives and Calls Per Hour (CPH) quotas their first day on the phones.

Focusing too strongly on such straight productivity metrics – and punishing agents for not hitting strict targets – kills agents' service spirit and compels them to do whatever is necessary to keep calls short and to handle as many as possible. This includes rushing callers off the phones before their issues are resolved, speeding through after-call work and making costly mistakes, and even occasionally pressing “release” to send unsuspecting customers into oblivion. You need to start emphasizing metrics like Contact Quality, Customer Satisfaction, First-Call Resolution, and Adherence to Schedule (the latter is a productivity-based metric your agents actually have control over). Do so, and you’ll be surprised how things like AHT and CPH end up falling in line anyway. Oh, and better do it soon – before your agents AND your customers decide to leave your company in the dust.   


2) Your quality monitoring program emphasizes the “monitoring” much more than the “quality”. Your supervisors and/or QA team are too focused on your internal monitoring form and not enough on how customers actually feel about the quality of the interaction they recently had with your center and agent. All agents see are subjective scores and checkmarks on a form that is likely better suited for measuring compliance than quality.

To get agents to embrace the quality monitoring process, let them have some input on what the form should contain, and, even more importantly, start incorporating direct customer feedback/ratings (from post-transaction surveys) into agents’ overall quality scores. For some reason, agents prefer it when a customer – rather than a supervisor – tells them how much their service stunk. Who knows, some agents might even try to improve.


3) Your contact center doesn’t fully embrace a culture of empowerment. Your contact center has failed to recognize and/or act on the fact that agents possess a wealth of insight, and know your customers better than anyone. It’s time to start empowering agents to use that insight and knowledge to improve existing processes and come up with new ones. This is probably the best way to continuously better the center while simultaneously making agents feel respected and valued. You’ll be amazed by the positive impact their ideas and suggestions will have on operational efficiencies, revenue and customer satisfaction. And because empowerment greatly increases engagement, you should see a big reduction in agent attrition and arson attempts.   


4) Coaching & training continuously get buried beneath the queue. Agents are eager to continuously develop and add value, but your overworked supervisors can’t find the time to stay on top of coaching and ongoing training. Your center needs to begin exploring feasible and effective ways to fit coaching and training into the schedule, such as using “just in time” e-learning modules, creating a peer mentoring program, and empowering agents to take on some supervisory tasks – which will free supervisors up to conduct more coaching and training while still giving them time to go home and visit their families on occasion.  


5) Agent rewards & recognition programs are uninspired – or non-existent. You’re merely going through the motions in terms of motivating and recognizing staff – futilely hoping that such stale incentives as cookies, balloons and gold stars will get agents to raise the roof performance-wise. It's time to revamp your agent rewards & recognition programs with proven approaches like: a Wall of Fame that pays tribute to consistent high performers; opportunities to serve on important committees or task forces; nominations for external industry awards for agents; fun happy hours where agents get to socialize and receive public praise for their concerted effort; and inspired events and contests during Customer Service Week and National Kiss Your Agents on the Mouth Day.     


6) You're handing the wrong people a headset. Maybe you are actually doing all the positive things I’ve suggested thus far, and are STILL struggling with low agent engagement and retention. Well, then you may want to take a close look at your recruiting and hiring practices. Regardless of how well you train, empower and reward staff, if you are attracting and selecting sociopaths and others who aren’t cut out for contact center work or your company culture, you’ll never foster the level of agent commitment or performance that’s required to become as good a customer care organization as your customers demand and deserve.   


A slightly different version of this post originally appeared on the “Productivity Plus” blog put out by the very good people at Intradiem.

 
True contact center success comes when organizations make the critical switch from a “Measure everything that moves” mindset to one of “Measure what matters most.” Given that we are now living in the Age of Customer Influence, “what matters most” is that which most increases the likelihood of the customer not telling the world how evil you are via Twitter.

No longer can companies coast on Average Handle Time (AHT) and Number of Calls Handled per Hour. Such metrics may have ruled the roost back when contact centers were back-office torture chambers, but the customer care landscape has since changed dramatically. Today, customers expect and demand service that is not only swift but stellar. A speedy response is appreciated, but only when it’s personalized, professional and accurate – and when what’s promised is actually carried out.

AHT and other straight productivity measurements still have a place in the contact center (e.g. for workforce management purposes as well as identifying workflow and training issues). However, in the best centers – those that understand that the customer experience is paramount – the focus is on a set of five far more qualitative and holistic metrics.

1) Service Level. How accessible your contact center is sets the tone for every customer interaction and determines how much vulgarity agents will have to endure on each call. Service level (SL) is still the ideal accessibility metric, revealing what percentage of calls (or chat sessions) – the “X” – were answered within “Y” seconds. A common example (but NOT an industry standard!) SL objective is 80/20 – 80 percent of contacts answered within 20 seconds.

The “X percent in Y seconds” attribute of SL is why it’s a more precise accessibility metric than its close cousin, Average Speed of Answer (ASA). ASA is a straight average, which can cause managers to make faulty assumptions about customers’ ability to reach an agent promptly. A reported ASA of, say, 30 seconds doesn’t mean that all or even most callers reached an agent in that time; many callers likely got connected more quickly while many others may not have reached an agent until after they perished.
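To make the SL-vs-ASA distinction concrete, here’s a minimal sketch using made-up answer times. Note how a respectable-sounding 30-second ASA can hide the fact that plenty of callers waited far longer than the service level threshold:

```python
# Minimal sketch: why Service Level ("X percent in Y seconds") tells you more
# than Average Speed of Answer (a plain mean). Answer times are made up.

answer_times = [5, 8, 10, 12, 15, 20, 25, 30, 45, 130]  # seconds to reach an agent

threshold = 20  # the "Y seconds" in an 80/20-style objective
answered_in_time = sum(1 for t in answer_times if t <= threshold)
service_level = 100 * answered_in_time / len(answer_times)

asa = sum(answer_times) / len(answer_times)  # Average Speed of Answer

print(f"ASA: {asa:.0f} seconds")                                      # 30 seconds
print(f"Service Level: {service_level:.0f}% in {threshold} seconds")  # only 60%
```

One outlier caller stuck on hold for two minutes is all it takes for the “average” to flatter you while 40% of customers stew in the queue.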


2) First-Call Resolution (FCR). No other metric has as big an impact on customer satisfaction and costs (as well as agent morale) as FCR does. Research has shown that customer satisfaction (C-Sat) ratings will be 35-45 percent lower when a second call is made for the same issue.

Trouble is, accurately measuring FCR is something that can stump even the best and brightest scientists at NASA. (I discussed the complexity of FCR tracking in a previous post.) Still and all, contact centers must strive to gauge this critical metric as best they can and, more importantly, equip agents with the tools and techniques they need to drive continuous (and appropriate) FCR improvement.


3) Contact Quality and 4) C-Sat. Contact Quality and C-Sat are intrinsically linked – and in the best contact centers, so are the processes for measuring them. To get a true account of Quality, the customer’s perspective must be incorporated into the equation. Thus, in world-class customer care organizations, agents’ Quality scores are a combination of internal compliance results (as judged by internal QA monitoring staff using a formal evaluation form) and customer ratings (and berating) gleaned from post-contact transactional C-Sat surveys.

Through such a comprehensive approach to monitoring, the contact center gains a much more holistic view of Contact Quality than internal monitoring alone can provide, while simultaneously capturing critical C-Sat data that can be used not only by the QA department but enterprise-wide as well.
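A minimal sketch of what such a blended score might look like. The 60/40 weighting and the 0-100 scales are illustrative assumptions on my part, not an industry standard – pick a blend your own QA team and agents can live with:

```python
# Sketch of a blended Quality score combining internal QA monitoring results
# with post-contact customer survey ratings. The 60/40 weighting and the
# 0-100 scales are illustrative assumptions, not a standard.

def blended_quality_score(qa_score, csat_ratings, qa_weight=0.6):
    """qa_score: internal monitoring result (0-100).
    csat_ratings: post-contact customer ratings for this agent (each 0-100)."""
    avg_csat = sum(csat_ratings) / len(csat_ratings)
    return qa_weight * qa_score + (1 - qa_weight) * avg_csat

# An agent who aces the internal checklist but leaves customers lukewarm:
score = blended_quality_score(90, [70, 80, 60])
print(f"Blended Quality score: {score:.1f}")  # 82.0
```

The point of the blend is exactly the gap in this example: a 90 on the compliance form shrinks once customers get a vote.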


5) Employee Satisfaction (E-Sat). Those who shun E-Sat as a key metric because they see it as “soft” soon find that achieving customer loyalty and cost containment is hard. There is a direct and irrefutable correlation between how unhappy agents are and how miserable they make customers. Failure to keep tabs on E-Sat – and to take action to continuously improve it – leads not only to bad customer experiences but also high levels of employee attrition and knife-fighting, which costs contact centers an arm and a leg in terms of agent re-recruitment, re-assessment, re-training, and first-aid.

Smart centers formally survey staff via a third-party surveying specialist at least twice a year to find out what agents like about the job, what they’d like to see change, and how likely they are to cut somebody or themselves.


For much more on these and other common contact center metrics, be sure to check out my FULL CONTACT ebook at http://www.offcenterinsight.com/full-contact-book.html.


 
If the key call center metrics were to form a rock band, Forecast Accuracy would most likely be the bass player – less flashy and famous than its fellow members like C-Sat, FCR and Service Level, but no less critical for an effective performance.

Forecast Accuracy is sometimes referred to as “forecasted contact load vs. actual contact load”, but only by managers who like to make things more painful than necessary. The metric shows the percent variance between the number of calls (or chats) predicted to arrive during a given period and the number of contacts the call center actually receives during that time. Most managers consider a 5% variance to be acceptable, though they naturally shoot for better (a lower %) than that. Those who regularly achieve a 15% variance or worse are sent directly to workforce management prison.



Missed it by That Much

So how exactly does one go about tracking Forecast Accuracy?

I’m glad I asked.

Call centers can retrieve data on forecasted contact load from whatever system or tool they use for forecasting (e.g., the center’s WFM system or Excel spreadsheets), then compare that to data on the actual contact load received, which comes from the center’s ACD and email/chat management system, as well as other report sources. The best call centers report forecast accuracy at the half-hour or hour interval level, rather than across days, weeks or months, as interval-level tracking gives a much clearer view of how horribly you botched the forecast.
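The interval-level calculation itself is blissfully simple. Here’s a minimal sketch with made-up half-hour volumes, using the 5% rule of thumb mentioned above as the pass/fail line:

```python
# Minimal sketch of interval-level Forecast Accuracy: percent variance
# between forecasted and actual contact volume per half-hour interval.
# Volumes are made up; the 5% threshold is the common rule of thumb.

forecast = {"09:00": 200, "09:30": 220, "10:00": 250}
actual   = {"09:00": 210, "09:30": 220, "10:00": 300}

for interval, predicted in forecast.items():
    received = actual[interval]
    variance_pct = 100 * abs(received - predicted) / predicted
    verdict = "OK" if variance_pct <= 5 else "off the mark"
    print(f"{interval}: forecast {predicted}, actual {received}, "
          f"variance {variance_pct:.1f}% ({verdict})")
```

Roll those three intervals up into a daily total and the misses partially cancel out – which is precisely why daily or weekly reporting lets a badly botched 10:00 half-hour hide in plain sight.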

Accurate forecasting is paramount in any call center that gives a darn about customers, agents and cost efficiency. Without a measure in place to gauge the effectiveness of the center’s forecast, under-staffing can often occur, causing queues to fill with furious callers, furious callers to verbally eviscerate innocent agents, and innocent agents to throw fists through expensive equipment. Of course, all of this adds expensive seconds and minutes to wait and handle times, causing irritated executives to cut budgets and rescind their promise to add a window in the call center. 

Inaccurate forecasting may result in costly over-staffing, as well. And while this may make customers happy, it will certainly irk senior management – as well as give agents too much free time between calls to think and figure out that they could probably earn more money making balloon animals for children in the park.


If you don’t have anything nice to say, share it in the comment box below.

Nice comments are also accepted – after careful review.