Off Center
 
When it comes to social customer care (providing service and support via social media channels), there are two key practices that contact centers must embrace: 1) monitoring; and 2) monitoring.

No, I haven’t been drinking, and no, there isn’t an echo embedded in my blog. The truth is, I didn’t actually repeat myself in the statement above.

Now, before you recommend that I seek inpatient mental health/substance abuse treatment, allow me to explain.


Monitoring in social customer care takes two distinctly different though equally important forms. The first entails the contact center monitoring the social landscape to see what’s being said to and about the brand (and then deciding who to engage with). The second entails the contact center’s Quality Assurance team/specialist monitoring agents' 'social' interactions to make sure the agents are engaging with the right people and providing the right responses.

The first type of monitoring is essentially a radar screen; the second type of monitoring is essentially a safety net. The first type picks up on which customers (or anti-customers) require attention and assistance; the second type makes sure the attention and assistance provided doesn’t suck.

Having a powerful social media monitoring tool that enables agents to quickly spot and respond to customers via Twitter and Facebook is great, but it doesn’t mean much if those agents, when responding…
  • misspell every other word
  • misuse or ignore most punctuation
  • provide incomplete – or completely incorrect – information
  • show about as much tact and empathy as a Kardashian
  • fail to invite the customer to continue his/her verbal evisceration of the company and the agent offline and out of public view
 
All of those scary bullet items above can be avoided – or at least minimized – when there’s a formal QA process in place for social media customer contacts. Now, if you’re thinking your QA and supervisory staff are too busy to carefully monitor and evaluate agents’ Twitter/Facebook interactions with customers (and provide follow-up coaching), then what the Zuckerberg are you thinking even offering such channels as contact options? I’ve said it before and I’ll say it again (and again, and again): If your contact center isn’t ready to monitor a particular contact channel, then it isn’t ready to HANDLE that channel.

Customers don’t applaud organizations for merely being progressive. If Toyota came out with a new automobile that ran on garbage but that had a 20% chance of exploding when you put the key in the ignition, customers’ response wouldn’t be, “Deadly, yes, but I might make it across the country on just banana peels!”

Social customer care is still new enough where organizations offering it are considered progressive. If your contact center is one such organization, are your customers applauding the strong and consistent social service and support your agents are providing, or is your center overlooking the quality component and losing too many customers to explosions?  

For more insights (and some irreverence) on Social Customer Care, be sure to check out my blog post, “Beginner’s Guide to Social Customer Care”. Also, my book, Full Contact, contains a chapter in which best (or at least pretty good) practices in Social Customer Care are covered.

 
In the eyes of many customers, self-service is not a compound word but rather a four-letter one. It’s not that there’s anything inherently bad about IVR or web self-service applications; it’s that there’s something bad about most contact centers’ efforts to make such apps good.

Relatively few contact centers extend their quality assurance (QA) practices to self-service applications. Most centers tend to monitor and evaluate only those contacts that involve an interaction with a live agent – i.e., customer contacts in the form of live phone calls or email, chat or social media interactions. Meanwhile, no small percentage of customers try to complete transactions on their own via the IVR or online (or, more recently, via mobile apps) and end up tearing their hair out in the process. In fact, poorly designed and poorly looked-after self-service apps account for roughly 10% of all adult baldness, according to research I might one day conduct.

When contact center pros hear or read “QA”, they need to think not only “Quality Assurance” but also “Quality Automation.” The latter is very much part of the former.

To ensure that customers who go the self-service route have a positive experience and maintain their hair, the best contact centers frequently conduct comprehensive internal testing of IVR systems and online applications, regularly monitor customers' actual self-service interactions, and gather customer feedback on their experiences. Let's take a closer look at each of these critical practices.


Testing Self-Service Performance

Testing the IVR involves calling the contact center and interacting with the IVR system just as a customer would, only with much less groaning and swearing. Evaluate such things as menu logic, awkward silences, speech recognition performance and – to gauge the experience of callers that choose to opt out of the IVR – hold times and call-routing precision.    

Testing of web self-service apps is similar, but takes place online rather than via calls. Carefully check site and account security, the accuracy and relevance of FAQ responses, the performance of search engines, knowledge bases and automated agent bots. Resist the urge to try to see if you can get the automated bot to say dirty words. There’s no time for such shenanigans. Testing should also include evaluating how easy it is for customers to access personal accounts online and complete transactions.

Some of the richest and laziest contact centers have invested in products that automate the testing process. Today's powerful end-to-end IVR monitoring and diagnostic tools are able to dial in and navigate through an interactive voice transaction just as a real caller would, and can track and report on key quality and efficiency issues. Other centers achieve testing success by contracting with a third-party vendor that specializes in testing voice and web self-service systems and taking your money.


Monitoring Customers’ Self-Service Interactions

Advancements in quality monitoring technologies are making things easier for contact centers looking to spy on actual customers who attempt self-service transactions. All the major quality monitoring vendors provide customer interaction recording applications that capture how easy it is for callers to navigate the IVR and complete transactions without agent assistance, as well as how effectively such front-end systems route each call after the caller opts out to speak to an actual human being.

As for monitoring the online customer experience, top contact centers have taken advantage of multichannel customer interaction-recording solutions. Such solutions enable contact centers to find out first-hand such things as: how well customers navigate the website; what information they are looking for and how easy it is to find; what actions or issues lead most online customers to abandon their shopping carts; and what causes customers to call, email or request a chat session with an agent rather than continue to cry while attempting to serve themselves.

As with internal testing of self-service apps, some centers – rather than deploying advanced monitoring systems in-house – have contracted with a third-party specialist to conduct comprehensive monitoring of the customers' IVR and/or web self-service experiences.


Capturing the Customer Experience

In the end, the customer is the real judge of quality. As important as self-service testing and monitoring is, even more vital is asking customers directly just how bad their recent self-service experience was.

The best centers have a post-contact C-Sat survey process in place for self-service, just as they do for traditional phone, email and chat contacts. Typically, these centers conduct said surveys via the same channel the customer used to interact with the company. That is, customers who complete (or at least attempt to complete) a transaction via the center’s IVR system are invited to complete a concise automated survey via the IVR (immediately following their interaction). Those who served themselves via the company’s website are soon sent a web-based survey form via email. Customers, you see, like it when you pay attention to their channel preferences, and thus are more likely to complete surveys that show you’ve done just that. Calling a web self-service customer and asking them to complete a survey over the phone is akin to finding out somebody is vegetarian and then offering them a steak.


It’s Your Call

Whether you decide to do self-service QA manually, invest in special technology, or contract with third-party specialists is entirely up to you and your organization. But if you don’t do any of these things and continue to ignore quality and the customer experience on the self-service side, don’t act surprised if your customers eventually start ignoring you – and start imploring others to do the same.  



 
True contact center success comes when organizations make the critical switch from a “Measure everything that moves” mindset to one of “Measure what matters most.” Given that we are now living in the Age of Customer Influence, “what matters most” is that which most increases the likelihood of the customer not telling the world how evil you are via Twitter.

No longer can companies coast on Average Handle Time (AHT) and Number of Calls Handled per Hour. Such metrics may have ruled the roost back when contact centers were back-office torture chambers, but the customer care landscape has since changed dramatically. Today, customers expect and demand service that is not only swift but stellar. A speedy response is appreciated, but only when it’s personalized, professional and accurate – and when what’s promised is actually carried out.

AHT and other straight productivity measurements still have a place in the contact center (e.g. for workforce management purposes as well as identifying workflow and training issues). However, in the best centers – those that understand that the customer experience is paramount – the focus is on a set of five far more qualitative and holistic metrics.

1) Service Level. How accessible your contact center is sets the tone for every customer interaction and determines how much vulgarity agents will have to endure on each call. Service level (SL) is still the ideal accessibility metric, revealing that “X” percent of calls (or chat sessions) were answered within “Y” seconds. A common example (but NOT an industry standard!) SL objective is 80/20.

The “X percent in Y seconds” attribute of SL is why it’s a more precise accessibility metric than its close cousin, Average Speed of Answer (ASA). ASA is a straight average, which can cause managers to make faulty assumptions about customers’ ability to reach an agent promptly. A reported ASA of, say, 30 seconds doesn’t mean that all or even most callers reached an agent in that time; many callers likely got connected more quickly while many others may not have reached an agent until after they perished.
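To see why a straight average misleads, here’s a quick back-of-the-napkin sketch in Python. The answer times are invented purely for illustration:

```python
# Hypothetical answer times (in seconds) for ten calls; purely illustrative.
answer_times = [5, 5, 5, 5, 5, 5, 5, 5, 5, 255]

# ASA: a straight average across all calls
asa = sum(answer_times) / len(answer_times)

# SL: percentage of calls answered within "Y" seconds (here, 20)
threshold = 20
sl = 100 * sum(t <= threshold for t in answer_times) / len(answer_times)

print(f"ASA: {asa:.0f} seconds")              # prints "ASA: 30 seconds"
print(f"SL: {sl:.0f}% within {threshold}s")   # prints "SL: 90% within 20s"
```

An ASA of 30 seconds sounds perfectly respectable, yet nine callers waited only 5 seconds and one waited over four minutes. The SL figure at least tells you that 10 percent of callers blew right past your threshold.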


2) First-Call Resolution (FCR). No other metric has as big an impact on customer satisfaction and costs (as well as agent morale) as FCR does. Research has shown that customer satisfaction (C-Sat) ratings will be 35-45 percent lower when a second call is made for the same issue.

Trouble is, accurately measuring FCR is something that can stump even the best and brightest scientists at NASA. (I discussed the complexity of FCR tracking in a previous post.) Still and all, contact centers must strive to gauge this critical metric as best they can and, more importantly, equip agents with the tools and techniques they need to drive continuous (and appropriate) FCR improvement.
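One common (and admittedly naive) approach is to count a contact as resolved if no repeat contact about the same issue arrives within a fixed window. The sketch below illustrates the idea; the 7-day window, the tuple format and the exact-match rule on issue codes are all my own assumptions for illustration, and real FCR tracking must also contend with channel hopping, vague issue coding and worse:

```python
from datetime import datetime, timedelta

def fcr_rate(contacts, window_days=7):
    """Naive first-contact-resolution estimate from a contact log.

    contacts: list of (customer_id, issue_code, timestamp) tuples.
    A contact counts as resolved on first touch if the same customer
    does not contact the center again about the same issue within
    the window. The 7-day window is an illustrative assumption.
    """
    window = timedelta(days=window_days)
    resolved = 0
    for cust, issue, ts in contacts:
        followups = [
            t for c, iss, t in contacts
            if c == cust and iss == issue and ts < t <= ts + window
        ]
        if not followups:
            resolved += 1
    return resolved / len(contacts)

log = [
    ("A", "billing", datetime(2024, 1, 1)),
    ("A", "billing", datetime(2024, 1, 3)),  # repeat: issue not resolved first time
    ("B", "login",   datetime(2024, 1, 2)),
]
rate = fcr_rate(log)  # 2 of 3 contacts had no follow-up within 7 days
```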


3) Contact Quality and 4) C-Sat. Contact Quality and C-Sat are intrinsically linked – and in the best contact centers, so are the processes for measuring them. To get a true account of Quality, the customer’s perspective must be incorporated into the equation. Thus, in world-class customer care organizations, agents’ Quality scores are a combination of internal compliance results (as judged by internal QA monitoring staff using a formal evaluation form) and customer ratings (and berating) gleaned from post-contact transactional C-Sat surveys.

Through such a comprehensive approach to monitoring, the contact center gains a much more holistic view of Contact Quality than internal monitoring alone can while simultaneously capturing critical C-Sat data that can be used not only by the QA department but enterprise-wide, as well.
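One simple way to combine the two inputs is a weighted average. Here’s a minimal sketch in Python, where the 0-100 scales and the 60/40 weighting are my own illustrative assumptions, not an industry standard:

```python
def blended_quality_score(internal_qa, csat_rating, qa_weight=0.6):
    """Blend an internal QA evaluation with a customer's C-Sat rating.

    Both inputs are assumed to be on a 0-100 scale; the 60/40 split
    is purely illustrative, not an industry standard.
    """
    return qa_weight * internal_qa + (1 - qa_weight) * csat_rating

# An agent who aces the internal checklist but annoys the customer:
score = blended_quality_score(internal_qa=92, csat_rating=75)
print(f"Blended quality score: {score:.1f}")  # prints "Blended quality score: 85.2"
```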


5) Employee Satisfaction (E-Sat). Those who shun E-Sat as a key metric because they see it as “soft” soon find that achieving customer loyalty and cost containment is hard. There is a direct and irrefutable correlation between how unhappy agents are and how miserable they make customers. Failure to keep tabs on E-Sat – and to take action to continuously improve it – leads not only to bad customer experiences but also high levels of employee attrition and knife-fighting, which costs contact centers an arm and a leg in terms of agent re-recruitment, re-assessment, re-training, and first-aid.

Smart centers formally survey staff via a third-party surveying specialist at least twice a year to find out what agents like about the job, what they’d like to see change, and how likely they are to cut somebody or themselves.


For much more on these and other common contact center metrics, be sure to check out my FULL CONTACT ebook at http://www.offcenterinsight.com/full-contact-book.html.


 
In an effort to gain recognition and respect, too many struggling contact centers try to bite off more than they can chew – implementing performance goals that they have as much chance of meeting as I do of being crowned Miss America. 

I often encourage managers of poorly performing contact centers to stop reaching for the stars and to instead just concentrate on not sucking. You have to crawl before you can walk, and you have to walk before you can run a world-class operation.

With that in mind, below are some key performance objectives that managers of sub-par centers might want to consider implementing to help earn some quick wins, build some confidence among staff, and quit drinking so much in the morning.

Contact Resolution. Don't worry about first-contact resolution (FCR) right now. True, resolving customer issues on the first contact has a big impact on customer satisfaction, agent engagement and operational costs, but chances are your center just isn't yet ready to achieve a lofty FCR objective. Instead, focus on a more feasible and less intimidating metric – fifth-contact resolution (5CR).

Studies have shown that it is easier to fully resolve customer issues on the fifth try than it is to do so on the first, second, third or fourth try. Research has also revealed that centers that are able to resolve customer issues within five contacts report higher customer satisfaction, agent retention and cost savings than do centers that don't resolve customer issues until the sixth, seventh or eighth contact.

Service Level.
Don't set your center and agents up for failure by shooting for an ambitious service level objective of answering 80 percent of calls within 20 seconds, or some similar challenging goal. It's much wiser to start out with the following, more palatable service level objective: 80% of calls answered… period. The number of seconds that it takes to do so should not be a major concern at this point – that will come later, assuming customers don’t burn your center to the ground in the meantime.  

Adherence to Schedule. Most contact centers focus too much on whether or not agents are in their seat at the right times. Your center will be much more likely to meet/exceed its adherence objective if you don't emphasize the "in your seat" and the "at the right times" parts so much. Instead, go a little easier on your staff by explaining the importance of them at least trying to stay within city limits during their shift. Agents will greatly appreciate the fact that you recognize how challenging and restrictive their job can be, and, as a result, will strive to meet the new objective you have set forth. Or not. 

Contact Quality. When it comes to assuring quality in struggling contact centers, the emphasis should be less on agents achieving high monitoring scores and more on whether or not the person rating the call throws up. When no vomiting occurs, be sure to praise the agent publicly, and consider grooming him or her for a supervisory role. If, however, vomiting does occur during a call evaluation – and it will – provide the agent with positive and nurturing pointers on how he or she could have made the interaction with the customer less nauseating to the person evaluating it.      

If you follow all the suggestions and recommendations provided here in this blog post, I guarantee that your contact center will move from being absolutely abysmal to being just a little pitiful in no time. Best of luck!


For performance measurement and management tactics that are even MORE practical than those highlighted here, be sure to check out my book,
Full Contact: Contact Center Practices & Strategies that Make an Impact.



 
Just because your call center surveys customers and occasionally even looks at the feedback they provide doesn’t mean you have a “Voice of the Customer” initiative in place. A true VOC program entails continuously and carefully analyzing customer ratings and sentiment, identifying trouble spots and trends, and taking decisive action before your customer base starts to hate you as much as your agents do.

If your call center is as serious about the customer experience as it is about low wages and bad lighting, then you need to make sure that your VOC initiative includes the following special components:   

Tools that report whether the customer was using their “inside voice” or their “outside voice.” Naturally, you want to pay attention to any customer who provides negative comments about a recent interaction, but for prioritization purposes it’s important to distinguish between customers who are merely a little frustrated and those who are considering hiring a hit man. By investing in speech analytics tools that detect customers’ emotion/volume levels during calls and survey responses, it becomes easier to determine which customers to ignore, which ones to call back within the week, and which ones to kidnap immediately before they ruin your brand via Twitter.
  
“Fist of the Customer” (FOC) software. Sometimes customers don’t verbalize exactly what they are feeling, thus it’s important to have tools in place that can dig deeper and uncover hidden sentiment. While still very much in the testing phase, FOC technology measures how forcefully frustrated customers throw their phones or punch their computers when interacting with an agent or IVR. Equipped with special motion-detection software that I’m too stupid to understand or explain, a typical FOC solution can be programmed to send an instant alert to the call center’s recovery team whenever a customer’s punch reaches a “Mike Tyson” or “Jerry Springer guest” level of force.

A “Last Word” option for agents.  To avoid having your customers’ negative and abusive comments adversely affect agent retention and morale, it’s important to incorporate a VOA (Voice of the Agent) component into your VOC program. After receiving a scathing rating or comment from a customer, agents will likely want to retaliate and get the last word in after they stop crying. Let them do so by providing them with what they think is the customer’s phone number but is really the number to a crisis hotline where operators are used to enduring profanity-laden diatribes from complete strangers.   


NOTE: If you found Greg’s “Voice of the Customer” recommendations to be insightful and valuable, you should consider seeking help from a licensed mental health professional. Contact Greg for referrals.