Off Center
 
When it comes to social customer care (providing service and support via social media channels), there are two key practices that contact centers must embrace: 1) monitoring; and 2) monitoring.

No, I haven’t been drinking, and no, there isn’t an echo embedded in my blog. The truth is, I didn’t actually repeat myself in the statement above.

Now, before you recommend that I seek inpatient mental health/substance abuse treatment, allow me to explain.


Monitoring in social customer care takes two distinctly different though equally important forms. The first entails the contact center monitoring the social landscape to see what’s being said to and about the brand (and then deciding who to engage with). The second entails the contact center’s Quality Assurance team/specialist monitoring agents' 'social' interactions to make sure the agents are engaging with the right people and providing the right responses.

The first type of monitoring is essentially a radar screen; the second type of monitoring is essentially a safety net. The first type picks up on which customers (or anti-customers) require attention and assistance; the second type makes sure the attention and assistance provided doesn’t suck.

Having a powerful social media monitoring tool that enables agents to quickly spot and respond to customers via Twitter and Facebook is great, but it doesn’t mean much if those agents, when responding…
  • misspell every other word
  • misuse or ignore most punctuation
  • provide incomplete – or completely incorrect – information
  • show about as much tact and empathy as a Kardashian
  • fail to invite the customer to continue his/her verbal evisceration of the company and the agent offline and out of public view
 
All of those scary bullet items above can be avoided – or at least minimized – when there’s a formal QA process in place for social media customer contacts. Now, if you’re thinking your QA and supervisory staff are too busy to carefully monitor and evaluate agents’ Twitter/Facebook interactions with customers (and provide follow-up coaching), then what the Zuckerberg are you thinking even offering such channels as contact options? I’ve said it before and I’ll say it again (and again, and again): If your contact center isn’t ready to monitor a particular contact channel, then it isn’t ready to HANDLE that channel.

Customers don’t applaud organizations for merely being progressive. If Toyota came out with a new automobile that ran on garbage but that had a 20% chance of exploding when you put the key in the ignition, customers’ response wouldn’t be, “Deadly, yes, but I might make it across the country on just banana peels!”

Social customer care is still new enough where organizations offering it are considered progressive. If your contact center is one such organization, are your customers applauding the strong and consistent social service and support your agents are providing, or is your center overlooking the quality component and losing too many customers to explosions?  

For more insights (and some irreverence) on Social Customer Care, be sure to check out my blog post, “Beginner’s Guide to Social Customer Care”. Also, my book, Full Contact, contains a chapter in which best (or at least pretty good) practices in Social Customer Care are covered.

 
In the eyes of many customers, self-service is not a compound word but rather a four-letter one. It’s not that there’s anything inherently bad about IVR or web self-service applications; it’s that there’s something bad about most contact centers’ efforts to make such apps good.

Relatively few contact centers extend their quality assurance (QA) practices to self-service applications. Most centers tend to monitor and evaluate only those contacts that involve an interaction with a live agent – i.e., customer contacts in the form of live phone calls or email, chat or social media interactions. Meanwhile, no small percentage of customers try to complete transactions on their own via the IVR or online (or, more recently, via mobile apps) and end up tearing their hair out in the process. In fact, poorly designed and poorly looked-after self-service apps account for roughly 10% of all adult baldness, according to research I might one day conduct.

When contact center pros hear or read “QA”, they need to think not only “Quality Assurance” but also “Quality Automation.” The latter is very much part of the former.

To ensure that customers who go the self-service route have a positive experience and maintain their hair, the best contact centers frequently conduct comprehensive internal testing of IVR systems and online applications, regularly monitor customers' actual self-service interactions, and gather customer feedback on their experiences. Let's take a closer look at each of these critical practices.


Testing Self-Service Performance

Testing the IVR involves calling the contact center and interacting with the IVR system just as a customer would, only with much less groaning and swearing. Evaluate such things as menu logic, awkward silences, speech recognition performance and – to gauge the experience of callers who choose to opt out of the IVR – hold times and call-routing precision.

Testing of web self-service apps is similar, but takes place online rather than via calls. Carefully check site and account security, the accuracy and relevance of FAQ responses, the performance of search engines, knowledge bases and automated agent bots. Resist the urge to try to see if you can get the automated bot to say dirty words. There’s no time for such shenanigans. Testing should also include evaluating how easy it is for customers to access personal accounts online and complete transactions.

Some of the richest and laziest contact centers have invested in products that automate the testing process. Today’s powerful end-to-end IVR monitoring and diagnostic tools are able to dial in and navigate through an interactive voice transaction just as a real caller would, and can track and report on key quality and efficiency issues. Other centers achieve testing success by contracting with a third-party vendor that specializes in testing voice and web self-service systems – and in taking their money.


Monitoring Customers’ Self-Service Interactions

Advancements in quality monitoring technologies are making things easier for contact centers looking to spy on actual customers who attempt self-service transactions. All the major quality monitoring vendors provide customer interaction recording applications that capture how easy it is for callers to navigate the IVR and complete transactions without agent assistance, as well as how effectively such front-end systems route each call after the caller opts out to speak to an actual human being.

As for monitoring the online customer experience, top contact centers have taken advantage of multichannel customer interaction-recording solutions. Such solutions enable contact centers to find out first-hand such things as: how well customers navigate the website; what information they are looking for and how easy it is to find; what actions or issues lead most online customers to abandon their shopping carts; and what causes customers to call, email or request a chat session with an agent rather than continue to cry while attempting to serve themselves.

As with internal testing of self-service apps, some centers – rather than deploying advanced monitoring systems in-house – have contracted with a third-party specialist to conduct comprehensive monitoring of the customers' IVR and/or web self-service experiences.


Capturing the Customer Experience

In the end, the customer is the real judge of quality. As important as self-service testing and monitoring is, even more vital is asking customers directly just how bad their recent self-service experience was.

The best centers have a post-contact C-Sat survey process in place for self-service, just as they do for traditional phone, email and chat contacts. Typically, these centers conduct said surveys via the same channel the customer used to interact with the company. That is, customers who complete (or at least attempt to complete) a transaction via the center’s IVR system are invited to complete a concise automated survey via the IVR (immediately following their interaction). Those who served themselves via the company’s website are soon sent a web-based survey form via email. Customers, you see, like it when you pay attention to their channel preferences, and thus are more likely to complete surveys that show you’ve done just that. Calling a web self-service customer and asking them to complete a survey over the phone is akin to finding out somebody is vegetarian and then offering them a steak.
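The channel-matching rule above can be sketched as a simple lookup. (The channel names and survey wording here are illustrative – they aren’t drawn from any particular survey tool.)

```python
# The post-contact survey goes out on whichever channel the customer used.
SURVEY_BY_CHANNEL = {
    "ivr": "concise automated survey via the IVR, immediately after the call",
    "web": "web-based survey form sent via email",
}

def pick_survey(contact_channel: str) -> str:
    # Fall back to an emailed web survey for channels not listed above.
    return SURVEY_BY_CHANNEL.get(contact_channel, SURVEY_BY_CHANNEL["web"])

print(pick_survey("ivr"))  # concise automated survey via the IVR, immediately after the call
```

The point of the fallback is simply that no customer should slip through survey-less just because they reached you through an unanticipated channel.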


It’s Your Call

Whether you decide to do self-service QA manually, invest in special technology, or contract with third-party specialists is entirely up to you and your organization. But if you don’t do any of these things and continue to ignore quality and the customer experience on the self-service side, don’t act surprised if your customers eventually start ignoring you – and start imploring others to do the same.  



 
True contact center success comes when organizations make the critical switch from a “Measure everything that moves” mindset to one of “Measure what matters most.” Given that we are now living in the Age of Customer Influence, “what matters most” is that which most increases the likelihood of the customer not telling the world how evil you are via Twitter.

No longer can companies coast on Average Handle Time (AHT) and Number of Calls Handled per Hour. Such metrics may have ruled the roost back when contact centers were back-office torture chambers, but the customer care landscape has since changed dramatically. Today, customers expect and demand service that is not only swift but stellar. A speedy response is appreciated, but only when it’s personalized, professional and accurate – and when what’s promised is actually carried out.

AHT and other straight productivity measurements still have a place in the contact center (e.g., for workforce management purposes, as well as for identifying workflow and training issues). However, in the best centers – those that understand that the customer experience is paramount – the focus is on a set of five far more qualitative and holistic metrics.

1) Service Level. How accessible your contact center is sets the tone for every customer interaction and determines how much vulgarity agents will have to endure on each call. Service level (SL) is still the ideal accessibility metric, revealing what percentage of calls (or chat sessions) were answered in “Y” seconds. A common example (but NOT an industry standard!) SL objective is 80/20.

The “X percent in Y seconds” attribute of SL is why it’s a more precise accessibility metric than its close cousin, Average Speed of Answer (ASA). ASA is a straight average, which can cause managers to make faulty assumptions about customers’ ability to reach an agent promptly. A reported ASA of, say, 30 seconds doesn’t mean that all or even most callers reached an agent in that time; many callers likely got connected more quickly while many others may not have reached an agent until after they perished.
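To see how a tidy-looking ASA can mask a wide spread of wait times, here’s a quick sketch (the answer times are invented for illustration):

```python
# Ten hypothetical answer speeds, in seconds (invented for illustration).
answer_times = [2, 3, 4, 5, 5, 6, 8, 10, 117, 140]

# ASA is a straight average -- it hides the two callers who waited ~2 minutes.
asa = sum(answer_times) / len(answer_times)

# Service level ("X percent answered within Y seconds") exposes the spread.
threshold = 20  # the "Y" in an 80/20 objective
service_level = 100 * sum(t <= threshold for t in answer_times) / len(answer_times)

print(f"ASA: {asa:.0f}s")                                      # ASA: 30s
print(f"Service level: {service_level:.0f}% in {threshold}s")  # Service level: 80% in 20s
```

Same ten calls, two very different stories: the 30-second ASA looks perfectly respectable, while the service level figure at least admits that one in five callers waited well beyond the threshold.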


2) First-Call Resolution (FCR). No other metric has as big an impact on customer satisfaction and costs (as well as agent morale) as FCR does. Research has shown that customer satisfaction (C-Sat) ratings will be 35-45 percent lower when a second call is made for the same issue.

Trouble is, accurately measuring FCR is something that can stump even the best and brightest scientists at NASA. (I discussed the complexity of FCR tracking in a previous post.) Still and all, contact centers must strive to gauge this critical metric as best they can and, more importantly, equip agents with the tools and techniques they need to drive continuous (and appropriate) FCR improvement.


3) Contact Quality and 4) C-Sat. Contact Quality and C-Sat are intrinsically linked – and in the best contact centers, so are the processes for measuring them. To get a true account of Quality, the customer’s perspective must be incorporated into the equation. Thus, in world-class customer care organizations, agents’ Quality scores are a combination of internal compliance results (as judged by internal QA monitoring staff using a formal evaluation form) and customer ratings (and berating) gleaned from post-contact transactional C-Sat surveys.

Through such a comprehensive approach to monitoring, the contact center gains a much more holistic view of Contact Quality than internal monitoring alone can while simultaneously capturing critical C-Sat data that can be used not only by the QA department but enterprise-wide, as well.
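As a back-of-the-envelope sketch of how such a blended score might be computed – the function and its 50/50 default split are my invention; the only figure from above is that internal results typically carry 40-60% of the weight:

```python
def blended_quality_score(internal_score, csat_score, internal_weight=0.5):
    """Blend an internal QA evaluation with post-contact C-Sat ratings.

    Both scores are on a 0-100 scale. internal_weight is the share given
    to the internal compliance results (typically 40-60%); the remainder
    comes from the customer's survey ratings.
    """
    if not 0.4 <= internal_weight <= 0.6:
        raise ValueError("internal weight is typically between 40% and 60%")
    return internal_weight * internal_score + (1 - internal_weight) * csat_score

# An agent who aces the internal checklist but leaves the customer cold:
print(blended_quality_score(95, 60))  # 77.5
```

The guard on the weight is the whole point: let the internal share creep much past 60% and you’re back to Quality being a purely internal opinion.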


5) Employee Satisfaction (E-Sat). Those who shun E-Sat as a key metric because they see it as “soft” soon find that achieving customer loyalty and cost containment is hard. There is a direct and irrefutable correlation between how unhappy agents are and how miserable they make customers. Failure to keep tabs on E-Sat – and to take action to continuously improve it – leads not only to bad customer experiences but also high levels of employee attrition and knife-fighting, which costs contact centers an arm and a leg in terms of agent re-recruitment, re-assessment, re-training, and first-aid.

Smart centers formally survey staff via a third-party surveying specialist at least twice a year to find out what agents like about the job, what they’d like to see change, and how likely they are to cut somebody or themselves.


For much more on these and other common contact center metrics, be sure to check out my FULL CONTACT ebook at http://www.offcenterinsight.com/full-contact-book.html.


 
In an effort to gain recognition and respect, too many struggling contact centers try to bite off more than they can chew – implementing performance goals that they have as much chance of meeting as I do of being crowned Miss America. 

I often encourage managers of poorly performing contact centers to stop reaching for the stars and to instead just concentrate on not sucking. You have to crawl before you can walk, and you have to walk before you can run a world-class operation.

With that in mind, below are some key performance objectives that managers of sub-par centers might want to consider implementing to help earn some quick wins, build some confidence among staff, and quit drinking so much in the morning.

Contact Resolution. Don't worry about first-contact resolution (FCR) right now. True, resolving customer issues on the first contact has a big impact on customer satisfaction, agent engagement and operational costs, but chances are your center just isn't yet ready to achieve a lofty FCR objective. Instead focus on a more feasible and less intimidating metric – fifth-contact resolution (5CR).

Studies have shown that it is easier to fully resolve customer issues on the fifth try than it is to do so on the first, second, third or fourth try. Research has also revealed that centers that are able to resolve customer issues within five contacts report higher customer satisfaction, agent retention and cost savings than do centers that don't resolve customer issues until the sixth, seventh or eighth contact.

Service Level.
Don't set your center and agents up for failure by shooting for an ambitious service level objective of answering 80 percent of calls within 20 seconds, or some similar challenging goal. It's much wiser to start out with the following, more palatable service level objective: 80% of calls answered… period. The number of seconds that it takes to do so should not be a major concern at this point – that will come later, assuming customers don’t burn your center to the ground in the meantime.  

Adherence to Schedule. Most contact centers focus too much on whether or not agents are in their seat at the right times. Your center will be much more likely to meet/exceed its adherence objective if you don't emphasize the "in your seat" and the "at the right times" parts so much. Instead, go a little easier on your staff by explaining the importance of them at least trying to stay within city limits during their shift. Agents will greatly appreciate the fact that you recognize how challenging and restrictive their job can be, and, as a result, will strive to meet the new objective you have set forth. Or not. 

Contact Quality. When it comes to assuring quality in struggling contact centers, the emphasis should be less on agents achieving high monitoring scores and more on whether or not the person rating the call throws up. When no vomiting occurs, be sure to praise the agent publicly, and consider grooming him or her for a supervisory role. If, however, vomiting does occur during a call evaluation – and it will – provide the agent with positive and nurturing pointers on how he or she could have made the interaction with the customer less nauseating to the person evaluating it.      

If you follow all the suggestions and recommendations provided here in this blog post, I guarantee that your contact center will move from being absolutely abysmal to being just a little pitiful in no time. Best of luck!


For performance measurement and management tactics that are even MORE practical than those highlighted here, be sure to check out my book,
Full Contact: Contact Center Practices & Strategies that Make an Impact.



 
Last week in Part 1 of this post, I cited several quality monitoring practices commonly embraced by the world’s best contact centers, then stopped midway through in a desperate attempt to make you come back to my website this week.

Here we go with Part 2. I hope the wild anticipation didn’t cause you to lose too much sleep.
 
Incorporate customer satisfaction ratings and feedback into monitoring scores. Here is where quality monitoring is really changing. This shift in quality monitoring procedure is so important, it’s underlined here – and just missed getting typed out in ALL CAPS.

Quality is no longer viewed as a purely internal measure. Many contact centers have started incorporating a “Voice of the Customer” component into their quality monitoring programs – tying direct customer feedback from post-contact surveys into agents’ overall monitoring scores. The center’s internal QA staff rate agents only on the most objective call criteria and requirements – like whether or not the agent used the correct greeting, provided accurate product information, and didn’t call the customer a putz. That internal score typically accounts for anywhere from 40%-60% of the agent’s quality score, with the remaining points based on how badly the customer said they wanted to punch the agent following the interaction.


Add a self-monitoring component to the mix. The best contact centers usually give an agent the opportunity to express how much he or she stinks before the center goes and does it for them. Self-evaluation in monitoring is highly therapeutic and empowering. When you ask agents to rate their own performance before they are rated by a quality specialist (and the customer), it shows agents that the company values their input and experience, and it helps to soothe the sting of second- or third-party feedback, especially in instances when a call was truly flubbed.

Agents are typically quite critical of their own performance, often pointing out mistakes they made that QA staff might have otherwise overlooked. Of course, the intent of self-monitoring sessions is not to sit and watch as agents eviscerate themselves – as much fun as that can be – but rather to ensure that they understand their true strengths and where they might improve, as well as to make sure they and your quality personnel are on the same page. Self-evaluations should cease if agents begin to slap themselves during the process, unless it is an agent you yourself had been thinking about slapping anyway.


Provide positive coaching soon after the evaluated contact. Even if you incorporate all of the above tactics into your monitoring program, it will have little impact on overall quality, agent performance or the customer experience if agents don’t receive timely and positive coaching on what they did well and where they need to improve. Notice I said “timely” AND “positive” – this is no either/or scenario: Giving agents immediate feedback is great, but not if that feedback comes in the form of verbal abuse and a kick to the shin; by the same token, positive praise and constructive comments are wonderful, but not if the praise and comments refer to an agent-customer interaction that took place during the previous President’s administration.

At the end of each coaching session during which a key area for improvement is identified, the best centers typically have the coach and the agents work together to come up with a clear and concise action plan aimed at getting the agent up to speed. The action plan may call for the agent to receive additional one-on-one coaching/training offline, complete one or more e-learning modules, work with a peer mentor, and/or undergo a lobotomy.


Reward and recognize agents who consistently deliver high-quality service. While positive coaching is certainly critical, high-performing agents want more than just a couple pats on the back for consistently kicking butt on calls. Top contact centers realize they must reward quality to receive quality, thus most have some form of rewards and recognition tied directly to quality monitoring results. Agents in these centers can earn extra cash, gift certificates, preferred shifts and plenty of public recognition for achieving high ratings on all their monitored calls during a set month or quarter. In some centers, if an agent nails their quality score during an even longer period (six months or a year), they may earn a spot on the center’s “Wall of Fame”, and perhaps even the opportunity to serve as a quality coach who can boss around their inferior peers.

To foster a strong sense of teamwork and to motivate more than just a select few agents, many centers have built team rewards/recognition into the mix. Entire groups of agents – not just the center’s stars – can earn cash and kudos for consistently meeting and exceeding the team’s quality objective over a set period of time. Such collective, team-friendly incentives not only help drive high quality center-wide, they help protect the center’s elite agents from being bludgeoned with their own “#1 in Quality” trophy by co-workers.


If you have some other key quality monitoring practices you’d like to share, please do so in the comment box below. If you’d like to take serious issue with the practices I’ve highlighted, get your own blog.


 
Quality monitoring is as old a practice in contact centers as sending electric shocks through agents’ headsets to help keep handle time down. But just because centers have been conducting quality monitoring forever doesn’t mean they have been doing it right.

Effective quality monitoring is so important, I’m going to do two successive blog posts on the topic. This week and next my posts will highlight the quality monitoring tactics and strategies shared by contact centers that are better than yours. Here we go: 

Gain agent understanding of and buy-in to monitoring from the get-go. In top contact centers, managers introduce the concept of monitoring during the “job preview” phase of the hiring process. Agent candidates learn about (or, if experienced, are reminded of) the reasons behind and value of monitoring, as well as how much monitoring will occur should they be offered and accept a job in the center. Managers clarify that monitoring isn’t intended to catch agents doing something wrong, it just often works out that way. They explain how monitoring is not only the best way to gauge an agent’s strengths and where they can improve, but also to pinpoint why the people who designed the center’s workflows and IVR system should be fired.

Gaining agent buy-in to monitoring goes beyond mere explanations and definitions. The best contact centers show new-hires and sometimes even job applicants how quality monitoring actually works by having them listen to recorded calls with a quality specialist. The specialist goes over the center’s monitoring form/criteria, shows how each call was rated, and lets the newbies decide on a fitting punishment for the agent evaluated. 


Use a dedicated quality monitoring team/specialist. In many contact centers, quality monitoring is carried out by busy frontline managers and supervisors. In the best contact centers, the process is carried out by dedicated quality assurance nerds – folks whose sole responsibility is making sure that the center’s agents and systems aren’t making customers nauseous.

I’m not saying that frontline managers/supervisors don’t know how to monitor; rather, I’m saying that they typically don’t have time to do so effectively and provide timely coaching. With a dedicated quality monitoring team (or, in smaller/less wealthy centers, a single quality specialist) in place, there is time to carefully evaluate several customer contacts per month for each agent, and to provide prompt and comprehensive feedback to those agents about why they should have stayed in school.


Develop a comprehensive and fair monitoring form. A good quality monitoring form contains not only all of the criteria that drive the customer experience, but also all the company- and industry-based compliance items that keep your organization from facing any indictments.

In top contact centers, the monitoring form is broken into several key categories (e.g., Greeting, Accuracy, Professionalism/Courtesy, Efficiency, Resolution, etc.), with each category – and the specific criteria contained within – assigned a different weighting depending on its perceived impact on customer satisfaction. For example, “Agent provided accurate/relevant information” and “Agent tactfully attempted to up-sell after resolving customer issue” would likely be weighted more heavily than “Agent didn’t spit while saying ‘thank you for calling’” or “Agent remained conscious during after-call wrap-up”.
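A weighted form along those lines might be tallied as follows – the category names come from the example above, but the weights and scores are made up for illustration:

```python
# Hypothetical form: category -> (weight, agent's score on that category).
# Weights sum to 1.0 and reflect perceived impact on customer satisfaction.
form = {
    "Greeting":                 (0.05, 100),
    "Accuracy":                 (0.35,  90),
    "Professionalism/Courtesy": (0.20,  80),
    "Efficiency":               (0.10, 100),
    "Resolution":               (0.30,  70),
}

# Sanity check: a form whose weights don't total 100% isn't fair to anyone.
assert abs(sum(w for w, _ in form.values()) - 1.0) < 1e-9, "weights must total 100%"

overall = sum(weight * score for weight, score in form.values())
print(f"Overall quality score: {overall:.1f}")  # Overall quality score: 83.5
```

Notice how the heavily weighted Accuracy and Resolution categories dominate the result: an agent can nail the greeting every time and still score poorly if the customer’s issue goes unresolved.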

In developing an effective monitoring form that agents deem fair and objective, smart managers solicit agent input and recommendations regarding what criteria should or should not be included, and how agents feel each should be weighted. Showing agents such respect and esteem is a great way for you to foster engagement and a great way for me to make money if I ever write a book aimed at agents.


Invest in an automated quality monitoring system. There are contact centers that still rely mainly on real-time remote listening to evaluate agent-customer interactions. There are also doctors that still use leeches for bloodletting.

If your center is staffed with more than 20 agents and you want a shot at lasting customer satisfaction, continuous agent improvement, and an invitation to private vendor cocktail parties at conferences, you must invest in an automated quality monitoring system. There is simply no better and faster way to capture customer data, evaluate performance and spot key trends in caller behavior and agent incompetence.

I’m certainly not saying that other monitoring methods are not useful. Real-time remote observations, side-by-side live monitoring, mystery shopper calls, hiding beneath agents’ workstations – these are all excellent supplementary practices in any quality monitoring program. But they should do just that – supplement, not drive the program.


Monitor ALL contact channels, not just phone calls. As a researcher, I’m always amazed by how many multichannel contact centers have a formal monitoring process in place only for live agent phone calls. According to a study by ICMI, fewer than two-thirds of contact centers that handle email contacts monitor customer email transactions, and fewer than half of centers monitor customers’ interactions with IVR or web self-service applications.

By virtually ignoring quality outside of the traditional phone channel, contact centers allow poor online and automated service to continue, creating a breeding ground for customer ire and high operating costs. Failure to monitor the email and chat channels will not only lead to agents’ errors and poor service going unnoticed, it can actually propagate bad service. Agents who see that the center is so focused on the phones but not on email or chat are likely to give it their all during customer calls but let quality slip a bit when tackling contacts via text. They may even use…gulp…emoticons. :0

The best contact centers have a formal process in place for evaluating agents’ email and chat transcripts for information accuracy, grammar/spelling, professionalism, and contact resolution. In addition, these centers continually test their IVR- and web-based self-service apps to ensure optimal functionality, as well as monitor those apps during actual interactions to make sure that customers aren’t being thrown into IVR dungeons or abandoning web pages to rip the company a new one on Twitter.     


That’s it for Part 1. I’ll share several more key quality monitoring practices in Part 2 next week. If you simply cannot wait that long, you have no other choice but to purchase a copy of my ebook immediately: http://www.greglevin.com/full-contact-ebook.html.


 
Back in November I posted an “Ask the ‘Expert’” piece in which I answered the pressing questions of several call center professionals. While I have no proof whatsoever, I’m quite certain that my responses changed these managers’ lives and careers forever, and may have even altered the universal face of customer care as we know it.

But now that the damage has been contained, I think it’s safe for me to try again.    

     

Q: Our call center just recently started monitoring popular social media sites. What should we be responding to, and how?

A: I’m very pleased to see that your center has heeded the warnings made by social media experts that 100% of all call centers will soon be 100% Twitter-based. That’s an important step.

Social customer care is a lot like attending a cocktail party – there’s a whole lot of chitter-chatter going on but you really don’t need to stop drinking and listen unless somebody is talking about you. What your call center needs to pay particularly close attention to is strong negative comments about your company in general, your products, your customer service, or your SAT scores. It’s best to post an initial public response empathetically acknowledging the issue (as that shows everybody that your company is “listening” and cares), and then invite the person to discuss the problem in more detail privately via phone or chat, or face-to-face behind the trash dumpsters outside Wal-Mart.    

Don’t become so obsessed over putting out fires that you overlook the positive comments that customers post on social sites. Such unsolicited public praise and compliments are what foster widespread brand advocacy and help to keep your agents from drinking bleach on their break. Be sure to thank anybody and everybody for their kind remarks, even if you know that most are coming from your own Marketing department.


Q: We are struggling to gain agent buy-in to our quality monitoring program. Any advice on how to change agents’ opinion of monitoring and improve results?

A: Over my long career posing as a call center expert, I’ve answered that question numerous times. The fact that I’ve never heard back from anybody regarding my response to them leaves me to believe that my suggestions solved all their monitoring problems. Hopefully I can do the same for you.

First off, you need to view things from your agents’ perspective. They don’t like you or anybody else on the management team very much and don’t want any of you listening to their conversations. To help overcome their disdain for you, try loosening their ankle shackles and removing the barbed wire that lines their cubicles. Also, the next time they go over the center’s strict Average Handle Time objective for the day, flog them with a little less force than usual, or at least use a smaller club.

Once you’ve gained agents’ favor and trust, sit down with them and explain that you hate monitoring, too, but that it must be done to help protect against customers showing up in person with automatic weapons. When agents sense your empathy and see that quality monitoring is actually intended to help them, they are much more likely to accept it before they take another job two weeks later.

To really get agents to embrace quality monitoring and strive to continuously improve, you need to add a “voice of the customer” (VOC) component to your program. This entails incorporating customer satisfaction survey scores and feedback into agents' internal monitoring scores and post-contact coaching. Having a VOC-based quality program enables you to go to agents and say, “See, it’s not just me who thinks you’re incompetent.” THAT’S the type of 360-degree feedback that turns poor performers into highly mediocre ones, which is really all you can ask for considering what you pay your staff.    


Note: The views and recommendations that Greg has shared with you today are his own and are not necessarily representative of his views and recommendations tomorrow. He is very moody and unpredictable. Also, it’s weird that he’s referring to himself in the third person here.



 
Few processes in the contact center are as contentious as quality monitoring. When not carefully explained and carried out with tact and sensitivity, monitoring smacks of spying. Cries of “Big Brother” and micromanagement are not uncommon in such environments, resulting in agent burnout, attrition, and poisonous darts being shot at QA staff.

Several studies have revealed that call monitoring can cause significant stress and dissatisfaction among agents. In one study – conducted by Management Sciences Consulting, Bell Canada – 55% of employees reported that some form of telephone monitoring added to their on-the-job stress to a large or very large extent.

In order to achieve the level of agent engagement and customer advocacy that today’s contact centers seek, managers need to aim for agents not only to accept and tolerate quality monitoring, but to embrace it. You may ask, “What kind of freak actually looks forward to having their every word recorded and keystroke captured while on the job?” Well, I’m not saying that agents need to be so excited about monitoring that they beg for it or do a jig when they find out that they will have 10 calls a month evaluated. However, in the best contact centers I have seen, agents do look forward to being monitored and coached occasionally – because they recognize the positive impact it can have on their performance, the customer’s experience and the organization’s success.

So how do these contact centers get their staff to embrace quality monitoring rather than run in fear from it? Let me count the ways:

They educate new-hires on the reasons for – and value of – quality monitoring.  In leading contact centers, managers don’t just tell new agents that they’ll be monitored on a regular basis, they tell them why. In fact, many centers do this with agents even before they become agents – taking time to explain monitoring policies and practices (and the reasons behind them) during the hiring process so that applicants know exactly what to expect before they take the drug test.

When describing the center’s monitoring program, it’s a good idea to lie just a little to make it sound more rewarding than it actually is. Tell agents that monitoring is not used to “catch them” doing things wrong, even though you know that it usually works out that way. Explain that having calls evaluated enhances professional development, builds integrity and helps to ensure customer loyalty, even though you know that such things are true only if your agents care about themselves, the company, and the future – which isn’t likely the case, what with the world ending in 2012. But hey, it’s worth a shot.


They incorporate post-contact customer ratings and feedback into monitoring scores. For some reason, agents would rather have a customer than a supervisor tell them that they suck at providing service. That’s why the best contact centers have incorporated a “voice of the customer” (VOC) component into their quality monitoring programs – tying direct customer feedback from post-contact surveys into agents’ overall monitoring scores.

Adhering to the VOC-based quality monitoring model, the contact center’s internal QA staff rate agents only on the most objective call criteria and requirements – like whether or not the agent used the correct greeting, provided accurate product information, and didn’t call the customer a putz. That internal score typically accounts for 40% to 60% of the agent’s quality score, with the remaining points based on how badly the customer said they wanted to kiss or punch the agent following the interaction.
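For the spreadsheet-inclined, the blending described above is just a weighted average. Here’s a minimal sketch of the math – the function name, 0-100 scale, and 50/50 default weight are my own illustrative assumptions, not a prescription from any particular QA platform:

```python
def blended_quality_score(internal_score, customer_score, internal_weight=0.5):
    """Blend an internal QA evaluation with post-contact customer survey
    feedback into one overall quality score (both inputs on a 0-100 scale).

    internal_weight is the share of the final score that comes from the
    internal QA evaluation; typical programs keep it between 0.40 and 0.60,
    with the remainder coming from the voice-of-the-customer (VOC) score.
    """
    if not 0.40 <= internal_weight <= 0.60:
        raise ValueError("internal weight is typically kept between 40% and 60%")
    return internal_weight * internal_score + (1 - internal_weight) * customer_score

# An agent who scores 90 with QA but only 70 with customers, weighted 50/50:
print(blended_quality_score(90, 70))  # 80.0
```

In other words, a glowing internal evaluation can’t fully paper over customers who wanted to punch the agent – which is precisely the point of the VOC component.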

 
They provide positive coaching. While incorporating direct customer feedback into monitoring scores is key, it won’t do much to get agents to embrace monitoring if the coaching that agents receive following an evaluated call is delivered in a highly negative manner.


During coaching sessions, the best coaches strive to point out as many positives about the interaction as they do areas needing improvement. This provides a nice balance to the evaluation and makes agents less likely to strike the coach with a blunt instrument. Even if the call was handled dreadfully, good coaches always find something positive to comment on, such as the agent’s consistent breathing throughout the interaction, or how well they were able to make words come out of their mouth. 


They empower agents to self-evaluate their customer interactions. There are few better ways to gain staff buy-in to quality monitoring/coaching than to trick agents into thinking that they have even the slightest bit of control during the process. The best contact centers always give agents the chance to rate their own call performance; the center then pretends to factor such self-evaluations into the overall quality score that’s recorded.

Many managers report that agents are often harder on themselves than the QA specialist or supervisor is when evaluating call performance. Sometimes, after listening to a call recording, agents become so upset by their own performance and/or the sound of their own voice that they try to physically harm themselves during the coaching session, which adds a nice touch of comic relief to an otherwise stressful situation for coaches.


They reward solid quality performance. Generally speaking, people are more likely to embrace an annoying or uncomfortable process if they know there is at least a chance for reward or positive recognition. I mean, if it weren’t for the free toothbrush, who would ever visit the dentist? And if it weren’t for the free alcohol, who would ever celebrate the holidays with family?

The same goes for quality monitoring. I’m not saying that you should give agents a free toothbrush and some alcohol after every call that is evaluated – just the ones where the agent didn’t make the customer or themselves cry. And remember, there are other ways to reward and recognize staff than with toothbrushes and alcohol; I just can’t think of any right now.


I’d love to hear about some of the ways that you and your center make quality monitoring more palatable to agents. Share your insight and experiences here by leaving a comment.