9.2 Why do managers need to measure things?

Often when we ask managers why they measure things they tell us it’s because they need to know where they are, or so they can compare (benchmark) themselves to others. All too often performance measurement is seen as an activity (knowing or comparing) without purpose. We like to hear a second part of the sentence – ‘so that we can do something about it’. This is the essential difference between performance measurement and performance management. Performance reporting links the two together.

9.2.1 Performance measurement, reporting and management

Performance measurement is simply about measuring things. Essentially it is the quantification of an input such as staff hours, an output such as cost, or the level of activity of an event or process such as the number of clients dealt with per day.

Performance reporting is the way managers, staff or systems report this information. It usually involves some sort of tabulation or graphic display, with some analysis of how the measure performs against an agreed target or objective.

Performance management, however, is about action. Based on the performance measure and its reporting, performance management is concerned with the actions taken that allow managers to control and improve their operations. Most organisations we have come across spend a lot of time measuring and reporting but precious little on management. There seems little point in doing the first two if the organisation is not going to do the third.

9.2.2 Purpose of measuring and managing performance

Selecting the right measures for an operation is not easy. Indeed, many organisations have too many wrong measures; just because something can be measured doesn’t mean it should be measured. If you are going to measure something it should have a clear purpose, and there should be systems or processes in place to support or achieve that purpose. These are two useful tests of a performance measure. Table 9.1 provides these two tests plus an additional eight that can be used to audit any performance measure.

Table 9.1 Ten tests of a performance measure

Purpose test       Is there a clear reason for the measure?
System test        Is there a clear system to ensure the results will be acted upon to achieve the purpose?
Truth test         Does it measure what it is meant to measure?
Focus test         Does it measure only what it is meant to measure?
Consistency test   Is it consistent whenever or whoever measures it?
Access test        Are the results available and easily understood?
Clarity test       Is ambiguity possible in the interpretation of the results?
Timeliness test    Can and will the data be analysed quickly enough for appropriate action to be taken?
Cost test          Is it worth the cost of collecting and analysing the data?
Gaming test        Will the measure encourage any undesirable behaviours?

There are two main purposes or reasons to measure things in operations: control and improvement. Two secondary reasons which support both of these are communication and motivation.1

● Control. One key purpose of performance measurement is to provide feedback so that action can be taken to keep a process in control (see Section 9.5). This requires a complete control loop, with measures, targets, a means of checking deviation, feedback mechanisms and means to take appropriate action if the process is not meeting the target.
This may be used to ensure consistent performance within an organisation, such as costs within budget, and also across organisations, to ensure, for example, that government health and safety regulations or discrimination legislation are being met.

● Improvement. Performance measures can provide a powerful means of driving improvement (see Chapter 12). Often simply because something is measured, improvements will follow, as measuring implies it is important and, as the saying goes, what gets measured gets done. Further, by linking measures with rewards (such as bonuses) and/or punishments (such as loss of a job), individuals can be motivated to improve their performance – assuming they have control over what is being measured (which is not always the case). Information about what pushes the process on or off target can also help individuals and organisations learn how better to manage the process involved.

● Communication. By measuring something the organisation is saying that it is important and needs to be controlled and/or improved. Conversely, by measuring everything (or lots of things) it is implying that nothing is important (amending the saying to what gets measured just gets measured). A measure, or set of measures, therefore informs employees as to what the organisation requires them to strive for and indeed what they as an individual or a department may be accountable for. It is also an important means of communicating and implementing strategy (see Chapter 15). By measuring speed of response in answering telephone calls, for example, an organisation is saying this is important, and it is implied that employees are expected to strive to meet targets or improve the speed of answering.

● Motivation. The measure, or set of measures, used by an organisation creates a particular mindset that influences employees’ behaviour. If speed of response is measured but not the quality of the interaction, employees may find themselves, albeit subconsciously, compromising quality for speed. It is important therefore to have the right mix or balance of measures and also a set that supports the strategic intentions of the organisation (see Section 9.3 and Chapter 15).

Some organisations also have to measure some things because it is required, for example by regulators or owners. While clearly this has to be done, managers should not confuse the two, i.e. the things they are required to measure versus the things they need to measure to control and improve their operations.

9.2.3 Systems to achieve the purpose

Having established the purpose for any measure, the second test is to check that there are systems or procedures in place to support the achievement of that purpose. We often find that although a manager purports that a certain measure is there to help improve the performance of the organisation, there is only a flimsy process, or none, in place to drive improvements. Similarly, for the purposes of control, the vital part of the control loop that is frequently missing is action to put the process back on target.
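To make the control loop concrete, here is a minimal sketch in code. It is illustrative only, not drawn from the text: the measure, target value and tolerance are hypothetical, and a real system would trigger investigation and corrective action rather than print messages.

```python
# A minimal performance-management control loop: measure, report against an
# agreed target, then act on the deviation. All names and values hypothetical.

def measure() -> float:
    """Quantify an output or activity level, e.g. calls handled today (stub)."""
    return 378.0

def report(value: float, target: float) -> float:
    """The reporting step: show performance against target, return deviation."""
    deviation = value - target
    print(f"measured {value:.0f} against target {target:.0f} ({deviation:+.0f})")
    return deviation

def act(deviation: float, tolerance: float = 20.0) -> None:
    """The management step: take action if the process is off target."""
    if abs(deviation) > tolerance:
        print("off target: investigate causes and adjust the process")
    else:
        print("in control: no intervention needed")

act(report(measure(), target=400.0))  # measurement -> reporting -> management
```

Measuring and reporting stop at the first two functions; performance management is the third.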
9.3 What needs to be measured?

Just as companies compete on a wide range of dimensions, so organisations need to employ a range of measures, not purely financial or indeed operational. There are four main types of measures (see Figure 9.1): developmental, operational, external and financial (often referred to as the balanced scorecard).2 The first two are the determinants (or drivers) of success; the second two are the results, or outcomes, of success.3 Developmental measures might include staff satisfaction, staff turnover, the number of service innovations and the level of employee engagement in improvement teams, for example. Operational measures include equipment or staff availability, waiting times, throughput times, the number of customers per day, and the number of faults or complaints. External measures include market share, customer satisfaction and customer repurchase intentions. Financial measures include total costs, cost per customer, revenue per customer and budget variance.

Figure 9.1 Four types of measures: developmental and operational (the determinants), external and financial (the results)

It is generally accepted that organisations need to have a mix (or balanced scorecard) of these measures. There is little use in driving an organisation only by knowing what the results (financial and external data) are, because there is no means of knowing what is determining those results. Conversely, driving an organisation by determinants alone (operational and developmental data) gives no understanding of the results of actions taken. Importantly, both determinants and results are needed to help managers understand the relationships between action and results, i.e. what changes they need to make (‘levers to pull’) to achieve the desired outcome.

In this chapter we are going to focus primarily on external measures – i.e. how do we measure our success from the customer’s perspective – and internal operational measures – how do we measure, control and manage the operation? Before we focus on those two areas we want to cover one important question: what is driving the choice of the set of measures?

9.3.1 Linking measurement to strategy

A key objective of performance measurement and management is that they should provide the link between day-to-day operations and the strategic intention of the organisation. Managers need to ensure that the measures are consistent with the organisation’s strategy. Organisations that can translate their strategy into their measurement system are far better able to execute their strategy because

● they can communicate their objectives and their targets
● they create a shared understanding of the organisation’s strategic intentions
● managers and employees can focus on the critical drivers
● investment, initiatives and actions are aligned with accomplishing strategic goals
● operations managers can communicate with and influence strategic decision makers.

Based on their work with many clients around the world, performance management consultants 2GC (see www.2gc.co.uk) and Kaplan and Norton4 have found that one of the best ways of linking measurement to strategy is using strategic linkage models (or strategy maps). This approach, sometimes referred to as second generation scorecards, tries to ensure that the objectives of all the different parts of an organisation support higher-level objectives which in turn support the organisation’s strategy. Strategic linkage maps provide operations employees with ‘a clear line of sight into how their jobs are linked to the overall objectives of the organization, enabling them to work in a coordinated, collaborative fashion toward the company’s desired goals. The maps provide a visual representation of a company’s critical objectives and the crucial relationships among them that drive organizational performance’.5
In strategic linkage models it is not the measures that cascade down the organisation through linked scorecards (i.e. the same things being measured at all levels in an organisation), but the objectives, leaving managers in the various parts of the organisation to develop their own appropriate measures and targets to ensure they deliver their objectives. This approach not only helps to align all organisational and operational activities with strategy but also reduces the amount of unnecessary measurement and reporting that other measurement systems tend to encourage.

In Figure 9.2 we provide a simplified example of a strategic linkage diagram for a firm of consultants. The example depicts some of the objectives (or rather shorthand versions of them) at just three levels in the organisation – head office, local office and the individual consultant – for the four main measurement areas associated with the balanced scorecard. The organisation’s strategy is contained in the oblong at the top of the diagram, the objectives are contained in the ovals, and the arrows show the major linkages. Managers at all levels then need to devise appropriate measures and targets to meet their objectives.

Figure 9.2 A strategic linkage diagram for a firm of consultants (the strategy at the top – ‘the preferred supplier for a range of professional services through industry leadership and professional development to achieve secure and profitable growth’ – is supported by linked head office, local office and individual objectives across the external, operational, financial and development areas)

This objective-driven approach to performance measurement has emerged as best practice from years of development around the world, because strategy maps/strategic linkage diagrams

● are more flexible, and easier to develop, communicate, maintain and ‘cascade’
● have been proven across a wide range of industries and functions.
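The cascade of objectives, rather than measures, can be pictured as a simple linked structure. The sketch below is our own illustration: the objective names loosely echo Figure 9.2, but the links are invented, not the firm’s actual map.

```python
# A sketch of a strategic linkage model: each objective records the level it
# belongs to and the higher-level objectives it supports. Links are invented.

objectives = {
    # objective: (level, higher-level objectives it supports)
    "secure and profitable growth":        ("strategy", []),
    "be preferred and reliable supplier":  ("head office", ["secure and profitable growth"]),
    "client satisfaction and retention":   ("local office", ["be preferred and reliable supplier"]),
    "quality and efficiency (individual)": ("individual", ["client satisfaction and retention"]),
}

def line_of_sight(objective: str) -> list[str]:
    """Trace an individual's objective up to the strategy it supports."""
    chain = [objective]
    supports = objectives[objective][1]
    while supports:
        chain.append(supports[0])            # follow the first (major) linkage
        supports = objectives[supports[0]][1]
    return chain

print(" -> ".join(line_of_sight("quality and efficiency (individual)")))
# Each level then devises its own measures and targets for its objectives.
```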
9.4 How can managers measure the customer’s perspective?

External data is important to allow operations managers to know how effective their actions are, yet often the external and internal measures they use to assess their processes miss out on a customer perspective. An interesting question to ask is: looking at the set of measures used by an operation, would its customers measure its performance in the same way? Operations managers too easily fall prey to inside-out thinking and see their measures from an internal, operations perspective. As a result, measuring what is important to customers can easily be overlooked. Although operations managers may well measure customer satisfaction, they may ignore, or overlook, detailed measures of performance that are important to their customers and thus concentrate on the more comfortable and familiar operations measures of performance.

Jan Carlzon, for example, when he was chief executive of SAS, noted that its cargo operations were measuring the wrong things:

We had caught ourselves in one of the most basic mistakes a service-oriented business can make [their cargo customers wanted prompt and precise cargo delivery] . . . yet we were measuring volume and whether the paperwork and packages got separated en route. In fact, a shipment could arrive four days later than promised without being recorded as delayed.6

A telecommunications company measured the number of orders (such as requests for new telephone lines from customers) completed within eight working days. This totally ignored the needs of customers who urgently required additional telephone capacity or, at the other extreme, customers who were not in a hurry because they were moving house in three weeks’ time. The company’s procedure of scheduling jobs on a first come, first served basis managed to upset most of its customers. Changing the measure to the percentage of orders completed within a two-hour timescale agreed with the customer not only greatly increased customer satisfaction but also reduced the number of calls made by customers to chase up the supply. It also reduced the cost of the operation and increased its efficiency. Indeed, the company found that the average time required by customers was two days in excess of its own eight-day target, so the customer advisors had more time available to them to schedule the appointments.

9.4.1 Measuring customer satisfaction

Customer satisfaction is one frequently used external measure in service organisations. It is usually assessed in a structured way using questionnaires and surveys or mystery shoppers. Questionnaires and surveys can be constructed using the eighteen quality factors discussed in Chapter 5, or factors that customers have identified as being important in focus groups, for example. Questions are then constructed for each factor, and customers are asked to rate their answer on a scale, for example 1 to 5. The questions can be used to assess the customer’s level of satisfaction with the various touch points in the customer’s journey, their overall satisfaction with the service, the level of various emotions they felt, such as trust or feeling informed, and their intentions, such as willingness to return or recommend.

One of the best-known instruments for assessing customer perceived service quality is SERVQUAL. SERVQUAL is a concise multiple-item scale questionnaire that organisations can use to assess their customers’ expectations and perceptions of their service and obtain a single customer satisfaction score for tracking and comparison. The instrument itself is a skeleton questionnaire that asks customers about their expectations and perceptions of the services of a particular organisation. It uses five consolidated quality factors or dimensions (assurance, empathy, reliability, responsiveness, tangibles) with 22 items for perceptions and 22 for expectations, using a seven-point Likert scale. A gap score (perceptions minus expectations) is then calculated for each pair of perception and expectation statements. The total of the gap scores is the SERVQUAL score. The gap scores can also be weighted by getting customers to add weights to each dimension. Repeated administration allows an understanding of how customers’ perceived service quality on each of the dimensions is changing over time.
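Because the scoring reduces to simple arithmetic, a short sketch may help. This is a minimal illustration rather than the validated instrument: the real questionnaire uses 22 paired statements, and the weights and scores below are invented.

```python
# SERVQUAL-style scoring: gap = perception - expectation for each paired
# statement (seven-point scale), optionally weighted by dimension.

dimensions = {
    # dimension: (customer-assigned weight, [(perception, expectation), ...])
    "assurance":      (0.25, [(6, 7), (5, 6)]),
    "empathy":        (0.15, [(4, 6)]),
    "reliability":    (0.30, [(5, 7), (6, 7)]),
    "responsiveness": (0.20, [(5, 5)]),
    "tangibles":      (0.10, [(6, 5)]),
}

weighted_total = 0.0
for name, (weight, pairs) in dimensions.items():
    gap = sum(p - e for p, e in pairs) / len(pairs)  # mean gap per dimension
    weighted_total += weight * gap
    print(f"{name:14s} mean gap {gap:+.2f}")         # negative = falling short

print(f"weighted SERVQUAL score {weighted_total:+.2f}")
```

Repeated runs on successive surveys would show how each dimension’s gap moves over time.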
The instrument thus provides a direct measure of satisfaction, i.e. perceptions minus expectations (see also Chapter 5).

Mystery shoppers are used by many organisations, in particular retailers, to assess the service that their customers experience. The mystery shoppers can be managers acting incognito but are usually provided by external agencies and work to an agreed scoring system. The problem with this method is that the items, or questions, for scoring have often been developed by managers (inside-out) rather than covering things that are important to customers. Performed in this way they are more like an operational audit, checking that staff provide the right information and stick to the correct forms of address and scripts, rather than dealing with issues that are more important to customers, such as solving their problems and being helpful. One particular bank, for example, uses mystery shoppers to check that its staff call customers by their title (such as Mr Johnston) and try to sell them the product of the week (such as a house loan). However, the bank staff may know the customer well and be on first-name terms, and may know he already has a house loan and is coming to the bank to pay in some money quickly. Calling him Bob and not drawing his attention to the offer of the week would mean they would be scored badly by an incognito mystery shopper overhearing the conversation.

Other more qualitative approaches to understanding and assessing, rather than measuring, customer satisfaction were covered in Chapter 5, including

● focus groups
● customer advisory panels
● complaint/compliment analysis
● critical incident technique
● sequential incident analysis.

These provide more anecdotal information about customer satisfaction and, importantly, provide insights about what the organisation might need to do to improve its service. One additional important benefit of collecting qualitative, anecdotal data is that senior managers are sometimes more driven to action (and to providing resources) when they see the verbatim and often colourful and forceful comments of their customers than when given a numeric overall satisfaction score of, say, 4.2 out of 5.

9.4.2 Problems in measuring customer satisfaction

Collecting information about customer satisfaction is something that many organisations now do. However, all too often there are problems with the instrument and, more importantly, problems in the use of the data.7

Problems with the instrument

There are several common problems with customer satisfaction instruments:

● Changing questions. One of the most important benefits of collecting satisfaction data is to track trends over time. However, frequently changing the questions undermines this benefit.
● Too many questions. There is often a tendency by researchers and market analysts to ask every conceivable question. While it might be helpful to know about every aspect of the service, long questionnaires usually result in poor returns.
● Missing the point. The reason for measuring satisfaction is often to find out the impact of customer satisfaction, yet this is often missed out of satisfaction instruments; for example, will customers return, will they provide positive word-of-mouth?
● Qualitative versus quantitative. While a survey may be able to tell you that a service has scored 4.2, it will not help you understand what to change or how to change it.
Likewise, focus groups may give you some ideas about what to do, but it will not be possible to assess the changes without quantitative data. There needs to be a combination of both qualitative and quantitative assessment of customer satisfaction.
● Survey-weary customers. Since so many organisations wish to know about our satisfaction, we often feel disinclined to complete questionnaires, especially if we have no confidence that the organisation will make the changes suggested or deal with the issues identified.
● Analysis fodder. In cases where respondents are not asked for any personal views but have to tick a lot of boxes about an organisation and its services, customers feel that they are just being used to feed a data engine, either to keep analysts busy or to help the organisation justify doing whatever it wishes.

Problems in the use of the data

● Resource hunger. Many organisations consume large amounts of resource in collecting, coding and analysing satisfaction data, and writing reports with lots of graphs and tables.
● Lack of impact. Yet, critically, the measures of customer satisfaction, the analysis and other reports may lead to little or no action. If the data is not actively used to control the operational processes that are meant to deliver satisfaction, or to improve them, there is little point in collecting and analysing the data in the first place.
● Satisfaction versus success. High satisfaction scores do not necessarily lead to organisational success. Just because customers say they are very satisfied does not mean that they are valuable customers (for example, that they provide a profitable revenue stream or support for the organisation, or that they use the organisation frequently). Indeed, customers can be highly satisfied with, say, a restaurant, but still go somewhere else next time.
● Openness to manipulation. When organisations link the satisfaction measures to employee reward systems, people may manipulate the system to ensure a beneficial result.

In sum, a lot of customer satisfaction assessment is a waste of time and effort; it drives no discernible improvement, consumes valuable organisational resources and wastes customers’ time. There are, however, some exceptions. In Case Example 9.1 we describe how the RAC goes about measuring customer satisfaction and what it does with the data.

Case Example 9.1 The RAC – customer satisfaction is king

Nigel Paget was the RAC’s operations director. He explained the importance of measuring customer satisfaction:

Customer satisfaction is absolutely the king here. Each patrol hands out a customer satisfaction card at the end of every breakdown and every month the patrols get their customer satisfaction index (CSI) for the customers they have dealt with. We not only measure the patrols’ satisfaction rating but we also measure their personal response rate as a means of ensuring that they hand out the forms. We also measure all the other things that are important to our customers, such as technical ability, their fix rate and also if they solved the problem. For example, if they can’t actually fix the fault, did they solve the problem for the customer, such as take the car to the dealer up the road? We want the patrol to take responsibility and go the extra distance to sort out the problem. Again it is all benchmarked and they receive a bonus for excellent performance. Around 40,000 customer satisfaction cards are returned a month – that’s about 400–500 per person. We know exactly what the customers the patrols served in the previous month actually thought of them. We compare this to their previous performance and to an aggregate score for everyone. We reward people as a result, and so if you are average you get your salary. If you perform above average, you get rewarded on top of that. So the incentive is to be better. Around 15–20 per cent will get the top bonus, which equates to about 10 per cent of salary, but it’s on a sliding scale.

Despite the central importance of customer satisfaction and the pressure on an individual’s performance, the measurement of satisfaction is seen as a positive that is valued by staff. Managers recognise that sometimes there are customers who do not necessarily answer the questions with integrity, and that sometimes not every interaction with a customer is going to be perfect because someone is a trainee or is just having a bad day. So an individual’s measures are compared not only with their previous performance but also with the average performance for everyone else. Nigel added:

They understand that one bad customer report out of 400 in a month is not going to have a disproportionate effect, but 30 or 40 out of 400 will. I find it hugely encouraging that when I sit down with people they will just ask me if I have any ideas as to how they might improve their CSI because they simply want to do better.

Source: This illustration is an extract of a case commissioned by the Institute of Customer Service as part of a study into service excellence. The author gratefully acknowledges the sponsorship provided by Britannic Assurance, FirstGroup, Lloyds TSB, RAC Motoring Services and Vodafone.

9.4.3 Qualitative or quantitative?

One interesting question remains. Which is the more appropriate way to assess customer satisfaction: quantitative methods such as questionnaires or qualitative methods such as focus groups? One way to answer this question is to ask how individual customers come to a view about their satisfaction with a service. There are two different answers to this question: a rational approach and an incident-based approach.
● The rational approach. The rational approach would suggest that customers consciously or unconsciously use a weighted average, so that a high score on one attribute or factor may offset a low score on another to arrive at a rational evaluation of the quality of a service. Indeed, many satisfaction surveys, such as SERVQUAL, are based on the assumption that a reasonable way of calculating overall satisfaction is by allocating weights to the various factors of transactions (according to importance as perceived by the customer), multiplying the weight by the score (on a 1–5 scale, for example) for each factor, and then cumulating them into an overall satisfaction rating (a sketch of this calculation follows below).
● The incident approach. An alternative view is that customers are less rational and react more to individual incidents. So any single incident – delighting or dissatisfying – could, despite the remaining adequate and satisfying transactions, result in a feeling of overall dissatisfaction or delight (see Chapter 5).

The reality is likely to be some combination of these approaches, which means it is important to take care when constructing algorithms to assess customers’ overall satisfaction with a service. It is certainly a mistake to assume that customers can identify with precision the reasons why they are satisfied or dissatisfied with a service. For example, on a training programme participants complained about the standard of the accommodation. It was only in discussion with the group that it emerged that the underlying dissatisfaction was with one of the presenters and that the accommodation, if not wonderful, was in fact satisfactory.
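The arithmetic the rational approach assumes is a straightforward weighted average. The sketch below is illustrative only: the factors, importance weights and 1–5 scores are invented.

```python
# The 'rational' model of overall satisfaction: importance weights applied to
# factor scores on a 1-5 scale. Factors and numbers are hypothetical.

factors = {  # factor: (importance weight, score out of 5)
    "speed of answer":   (0.4, 5),
    "staff helpfulness": (0.3, 2),
    "accuracy":          (0.3, 4),
}

overall = sum(weight * score for weight, score in factors.values())
print(f"overall satisfaction {overall:.1f} / 5")  # 0.4*5 + 0.3*2 + 0.3*4 = 3.8

# The incident view warns that this can mislead: the single poor score of 2
# above may dominate how the customer actually feels about the service.
```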
The best way to assess customer satisfaction is by both qualitative and quantitative means. A quantitative answer (a number) allows satisfaction to be tracked and compared to other organisations or parts of the same organisation. The qualitative data helps to answer questions about what went wrong and what needs to be changed.

9.5 How can managers measure, control and manage the operation?

There is a wide range of measures that can be used to control and manage an operation, such as cost measures (for example cost per customer, labour cost per day), quality measures (for example number of errors, number of complaints) and time measures (for example time to serve or process a customer, waiting times and on-time provision of service). Which are the right measures depends upon the strategy of the organisation and the objectives set for the operation (see earlier in this chapter), legal or statutory requirements, and the measures that operations managers need to make sure they are delivering the service, taking a customer’s view about what is important. Once those measures are known and agreed they need to be measured, reported and used to control or improve the operation. In this section we will focus on reporting measures and controlling operations using the measures. (We will cover improvement in Chapter 12.) But first operations managers also need to understand the impact of one measure on another.

9.5.1 The relationships between measures

Some organisations are now trying to understand the relationships between their various measures, i.e. to understand how a change in one area of performance might impact on another. This is sometimes referred to as ‘interlinking’.8 There is often a danger that focusing on one area and one measure may have a detrimental effect on another. For example, a contact centre manager who focuses on speed of response as a key measure of the unit’s performance may have a detrimental impact on call quality and on customer satisfaction. Such a focus may also lead to an increase in workload if many repeat calls are required by customers to obtain all the information they need.

By using knowledge about the relationships between developmental, operational, external and financial performance measures, organisations can become systematically smarter. Managers will begin to understand, with greater certainty, the likely effect of making resource decisions, which helps them set appropriate targets and better support the strategic intentions of their organisation. Indeed, some leading-edge organisations are beginning to understand and exploit these relationships to create a business case for service (we will develop this point in Chapter 17).
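A simple first step towards interlinking is to correlate pairs of measures across the four types and look for strong relationships. The sketch below is illustrative, with invented weekly figures; it is not any organisation’s actual model.

```python
# A first step towards 'interlinking': correlating measures to see how a
# change in one area may move another. Weekly figures are invented.
from statistics import correlation  # available from Python 3.10

weeks = {
    "call volume":           [5200, 6100, 5800, 7000, 6600, 7400],
    "abandon rate (%)":      [2.1, 3.0, 2.6, 4.2, 3.5, 4.8],
    "customer satisfaction": [4.4, 4.2, 4.3, 3.9, 4.0, 3.7],
}

names = list(weeks)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        r = correlation(weeks[a], weeks[b])
        print(f"{a} vs {b}: r = {r:+.2f}")
# Correlation is not causation, but strong links suggest which 'levers to
# pull' are worth investigating before committing resources.
```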
In Case Example 9.2 Sean Guilliam, the head of Lombard Direct’s call centre, explains how his operation is taking the first steps towards understanding the relationships between operational, external, developmental and financial measures.

Case Example 9.2 Lombard Direct

Lombard Direct must have one of the best-known telephone numbers in the UK, 0800 2 15000, which is based on its slogan ‘loans from 800 to 15,000 pounds’. Lombard Direct is a subsidiary of Lombard Bank, part of the National Westminster Bank group. Unsecured loans over the telephone constitute about 90 per cent of the company’s business, with other products including insurance on loans, house, contents and motor insurance, savings and a credit card. The main call centre, in Rotherham, South Yorkshire, is a 24-hour operation that operates every day of the year. The centre handles over 2 million calls a year. Monday is a typically busy day, when around 6,000–7,000 calls are received. The call centre has around 200 seats (for the customer advisers – CAs) and employs around 250 full-time equivalent staff, with a large contingent of part-timers. Callers are asked a number of questions to rate their creditworthiness and are allocated into a band. This risk assessment, together with the size of the borrowing requested, determines the rate of interest to be charged.

Sean Guilliam is the head of the call centre and he judges the performance of its CAs on six key performance measures. He explains:

We use the following measures:
● Telephone availability – the time an individual is available to take calls.
● Insurance sales – because we want to encourage the people who take out loans with us to take out our insurance cover on the loans.
● Media and product code accuracy – it is very important for our marketing people to know where the customers heard of us. However, our systems are a bit lacking in this area and sometimes the CAs have difficulty finding the right code – there are so many!
● Call conversion – where we calculate the number of successful loans sold compared to the number of calls taken.
● CATS (Customer Adviser Technical Skills) – procedural accuracy, such as giving the right advice and adhering to data protection requirements.
● Call analysis – an assessment of the interactions with a customer and compliance with the correct procedure.

We have four ‘spot’ levels and CAs are reviewed every three months. Each level has a set of criteria based on the six key measures. If someone attains a higher level for two assessments they go up one spot level; if they perform less well over three periods they will go down. Each level is worth about an extra £1 per hour, so it is quite significant. Also they need to get to Level 2 before we will offer them a permanent contract, though I think we need to remove this barrier and put everyone on permanent from the start to bring us in line with the industry.

At a call centre level I also monitor loan volumes, utilisation, talk time, service levels and abandon rates. Service level refers to the percentage of calls answered within 10 seconds. Utilisation is total talk time divided by total pay time (including training time and maternity, for example). Talk time is the time each operator spends talking to customers. When you compare this to telephone availability you have to be careful. Yes, you want high productivity, i.e. lots of talk time when available, but too much talk time could indicate either we need more staff because operators could be busy and we could be losing calls, or an individual spends too much time talking to customers. Similarly, when I compare loan conversions and insurance sales, although we want a good ratio of insurance sales to loans, too high a ratio might mean that staff could be doing too hard a sell. We don’t want customers put off from using us again. The problem is in balancing flexibility with control! Especially when a 1 per cent increase in insurance sales can contribute a quarter of a million to the bottom line.

One of the big problems in staff scheduling is that call volumes are partly dependent upon marketing spend. And, just to make things interesting, volumes are also affected, as you might expect, by weather, holidays and sporting events, for example. We use the volume expectations from marketing spend to create a volume forecast; we then pro-rata this to forecast the volumes of calls we expect individuals to be dealing with: this determines the number of CAs I need and therefore the costs of the operation. I also monitor ‘people measures’ such as attrition, absenteeism and staff morale. It can be all too easy to trade off volumes for morale. We have a great atmosphere here and morale is very high.

To help my planning we have created a correlation model that has looked at the relationships between volumes, utilisation, service levels, abandon rates, costs and ‘people measures’. I can see the effect of a change in volume on all my key statistics. I want to get high utilisation, high service levels, low abandon rates, low costs and high morale. When we look at our performance data we are now trying to look across the rows and not up and down. It’s a new development but it’s about how things link together. It helps us understand the relationships between the key variables and also helps us ask the right questions.
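The call-centre measures Sean Guilliam describes are simple ratios, as the sketch below shows. The daily figures are invented for illustration; only the definitions (service level as the percentage of calls answered within 10 seconds, and utilisation as talk time over pay time) come from the case.

```python
# Call-centre measures from the case, computed on one invented day of data.

calls_offered = 6500           # a busy Monday; the split below is invented
answered_within_10s = 5655
abandoned = 260

service_level = answered_within_10s / calls_offered * 100
abandon_rate = abandoned / calls_offered * 100

talk_time_hours = 1450         # total time operators spent with customers
pay_time_hours = 2000          # total paid hours, including training etc.
utilisation = talk_time_hours / pay_time_hours * 100

print(f"service level {service_level:.1f}%  "
      f"abandon rate {abandon_rate:.1f}%  "
      f"utilisation {utilisation:.1f}%")
```

As the case warns, no single figure can be read in isolation: high utilisation may equally signal productivity or understaffing.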
9.5.2 Performance reporting

Many organisations have taken performance reporting to an extreme level by producing thick reporting documents with pages of detailed tabulations and colourful charts which are meaningless to all but those who created them. The solution is to know which measures are important (usually around four to six), measure them carefully and report them simply, but report them with a view to making changes, not just for the sake of reporting them.

A good way we have found in some organisations is to use a simple but visual report form for each measure. Figure 9.3 shows a display for a single (important) performance measure – errors in a process – which includes four quadrants. Rather than simply showing the data (in this case percentage errors) for this month (February), the chart (top left) provides a clear view of the trend and the associated target, thus allowing changes over time to be seen. The top right quadrant provides an analysis of February’s data to identify the most frequent sources of errors. As a result of the analysis, the bottom right quadrant reports on the actions to be taken to try to deal with the most common errors, and on who will be responsible for taking the action and by when they should report. The final quadrant provides an implementation record that checks the impact of previous action plans: who was supposed to do what, by when, and the effect that it had. The chart also, and importantly, displays the purpose or objective of the measure in the centre, with the person responsible for the measure and follow-up actions top right. We also recommend that if there is no one responsible or no clear objective then the measure and its report should be scrapped!

Figure 9.3 Reporting performance (source: adapted from work by Neely, Andy D. (1998), Measuring Business Performance, The Economist Books, London, and Carole Driver, Plymouth Business School)

We would suggest that performance measurement reports should include only a small number of key measures and that for each measure there should be a display of the following (a sketch of such a report as a simple data structure follows the list):

● the purpose/objective
● the person responsible
● trends over time
● performance against target
● supporting data and analysis
● identification of causes/problems
● action to be taken, by whom and by when
● an assessment of action taken.
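Such a report can also be held as a simple data structure, which makes the purpose and ownership tests easy to enforce. The sketch below is our own illustration: the field values loosely echo Figure 9.3 and are otherwise invented.

```python
# A single-measure performance report as a data structure: purpose and owner
# up front, trend against target, analysis, actions and an implementation
# record. Values loosely echo Figure 9.3; details are invented.
from dataclasses import dataclass, field

@dataclass
class MeasureReport:
    measure: str
    purpose: str                 # no purpose or owner? scrap the report
    owner: str
    target: float
    trend: list[float] = field(default_factory=list)        # results over time
    analysis: dict[str, int] = field(default_factory=dict)  # cause -> count
    actions: list[str] = field(default_factory=list)        # what, who, when
    record: list[str] = field(default_factory=list)         # impact of actions

    def on_target(self) -> bool:
        return bool(self.trend) and self.trend[-1] <= self.target

report = MeasureReport(
    measure="process errors (%)",
    purpose="eliminate errors in the process",
    owner="Sarah Richardson",
    target=30.0,
    trend=[48, 44, 45, 40, 36, 31],
    analysis={"data input": 45, "settlement risk": 34, "insurance data": 18},
    actions=["distribute checklist - owner JMP - due March"],
    record=["Dec - 50% reduction in settlement errors"],
)
print("on target" if report.on_target() else "off target")  # off target: 31 > 30
```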
9.5.3 Controlling performance

A key operational performance objective is to achieve consistency of outcome for customers, i.e. delivering to the specification. Most service organisations report that reliability is one of the most significant factors influencing customer satisfaction – in other words, ‘saying what you do and doing what you say’. This section considers three aspects of control: setting the targets, assessing the capability of a process, and the role of quality systems, such as ISO 9000.

Target setting

While not all measures will have targets associated with them, targets can be a useful means to aid performance management by controlling performance, judging improvements, motivating employees and communicating the speed and size of the change required. Indeed, target setting is a key element of driving performance improvement. There is evidence to suggest that performance improves when clear, defined, quantitative targets are provided.9 There is, however, an alternative view that suggests that some measures and targets may lead to ‘gaming’, that is, playing the system just to meet the target. One museum targeted to increase its number of visitors started counting not only visitors but also delivery personnel just to try to reach its targets. As John Seddon stated, ‘Targets drive people to use their ingenuity to meet the target, not improve performance’.10 Operations managers need to decide carefully how targets will be set for their measures to control the process, drive process improvement and/or motivate the staff.

There are essentially three types of target, or benchmark, against which performance can be compared: internal, external and absolute (see Figure 9.4). (We will cover benchmarking in more detail in Chapter 14.)

Figure 9.4 Three types of targets: internal (process-based and other-process-based), external (competitor-based, best-in-field-based and customer-based) and absolute

● Internal targets. Internal targets may be based upon the past performance of the process under consideration (process-based). The target is usually similar to the previous period’s target, or slightly greater or lower, in order to drive gradual improvements in the process.
The key disadvantage of using the process itself as the base for comparison is that, while undoubtedly encouraging improvements in performance, it only provides information as to whether the operation is getting better over time rather than whether performance is satisfactory.11 The targets may also be based upon the performance of other similar internal processes (other-process-based). This encourages comparisons across processes and the sharing of practices between them to try to meet the performance of the best. Comparison with other internal processes has the additional advantage that it provides a relative position for each process within the organisation.
● External targets. External targets are based upon comparison with other organisations, using either competitor-based targets and/or ‘best-in-field’ benchmarks. Competitor-based targets are based on the performance of similar operations in other similar, competing organisations. Best-in-field benchmarks are based upon the performance achieved by organisations that may or may not be in the same industry but whose performance is considered to be outstanding. An important, though often overlooked, external target base for service operations is customer-based targets (just as customer-based measures, too, are easily overlooked), i.e. for a particular activity, what level of service do customers consider to be appropriate?
● Absolute targets. Some processes need to be operated with absolutely no defects or 100 per cent adherence to standard. It is unacceptable for life-support machines or stock-market computers or national defence systems to fail; although they do occasionally fail, with serious consequences, their operational targets are absolute.
● Stretch targets. A critical question to ask is: by how much should the target be above the current level of performance? Essentially, this depends upon the size of the change in performance required, on the assumption that it is feasible and desirable that such a change can be made (see also Chapter 12). Internal targets are appropriate for operations wishing to improve their performance continually and incrementally. This would target performance improvements relative to their historical achievements. Often organisations using a continuous improvement strategy, or kaizen (see Chapter 12), tend to be both successful and competitive: they may have already outperformed competitors or be the best-practice leader focusing on building upon their existing strengths. Organisations undertaking radical change of a process should set stretch targets. These are likely to be based on external benchmarks because of the need to improve performance dramatically in relation to that of competitors or external comparators. Reference to external sources for targets, such as competitors, brings both legitimacy and a sense of urgency to those faced with the need for radical change.
● Employee involvement in target setting. To motivate employees to try to reach a target level of performance it is essential that they have some control over the variables that affect the performance, and it also helps if they have had a role in negotiating what that target should be, i.e. what they think is achievable.
This is what one would expect to find for all processes undergoing continuous, kaizen-type improvement, as employee involvement and participation are central to the philosophy of kaizen. This approach encourages employees to address questions such as: How can you improve what you are doing? How can you improve the process by which you are doing it? How can you improve the way in which you interact with other people? This in turn requires the encouragement, support and authority (empowerment) to propose and implement these improvements, backed by a supportive organisational culture and a ‘team’ approach to problem-solving and improvement.12 Because of this philosophy of empowerment, participation and involvement, where the responsibility for process improvement rests with employees rather than, for example, quality specialists, targets should be set through a process involving employees. The employees should decide what might be achievable over a period of time, as it is they who have the responsibility for change and the authority to carry it out. For organisations undergoing more radical change, targets may be imposed by the senior managers overseeing the change programmes on a command-and-control basis. In radical change programmes, therefore, overall responsibility may rest with senior management champions who devote a substantial amount of their time and effort to both the design and implementation of process change.
● Linking targets to rewards. Organisations need to decide what rewards/penalties will be associated with the achievement of their chosen targets. If rewards linked to targets are to work as intended, they must be clearly perceived as sufficient to justify the additional effort to obtain them, directly related to the required performance, and perceived as equitable, and must take into account the complexities of individual versus team-based effort.13 In addition, the reward structure must also be accompanied by appropriate feedback mechanisms.14 Rewards take a variety of forms, from purely financial to a mixture of financial and non-financial, such as achievement awards and other forms of recognition. To be effective the rewards need to be tailored to the specific requirements of the performance improvement programmes in use within an organisation. While we would expect to find financially based rewards applied in all forms of change programmes, we would suggest that non-financial, and therefore less threatening and more encouraging, forms of reward be used to promote continuous change. It has been contended that continuous improvement strategies require ‘reward systems that place greater emphasis on quality and team-based performance’15 since they are specifically concerned with the motivation of employees and the elimination of the fear of job losses. Processes undergoing continuous change should therefore base their rewards on a mix of financial and non-financial rewards targeted at encouraging improvements in team-based performance. In contrast, radical change strategies emphasise individual performance, so the performance measurement system should measure the location of specific results and individual employee performance. Given the higher costs and risks associated with step-change improvements, we would expect rewards associated with such changes to tend to be primarily financial in nature.

Capable processes

The quality management concept of building capable processes is helpful here.
This is a fundamental principle of quality management and is at the heart of the Deming philosophy, requiring ‘evidence that quality is built in’.16 Many service operations utilise the statistical process control (SPC) methodology to assess the extent to which a process is capable, or in control.

Figure 9.5 shows the distribution of sample means measuring the performance of two hotels (A and B), which deliver breakfast trays to guests’ rooms. In each case, the hotel offers guests a choice of times for delivery of their breakfast tray. Both hotels have chosen 10-minute ‘windows’ in the belief that this is what customers require. Figure 9.5 shows the distribution of the breakfast tray delivery times for a particular 10-minute window. Hotel A has put in place the processes and capacity to ensure that it consistently keeps its promises, whereas Hotel B appears unable to do so. The former is an example of a capable process, whereas the latter is a process out of control. If the promise of meeting this time window is a key element of the ‘contract’ between provider and customer, the customer satisfaction ratings for Hotel B will be under threat.

Figure 9.5 A capable process and an out-of-control process (frequency distributions of delivery times across the 7.00 am to 7.10 am window for Hotel A and Hotel B)

Hotel B has two basic strategies that its management might consider:

● to invest in the delivery process to ensure that it can meet its process specification consistently
● to relax the process specification, in this case increasing the duration of the ‘time window’ offered to guests (perhaps 20 minutes instead of the current 10 minutes).

Of course, the decision as to which to implement can only be made once customer research has indicated how important this issue is and what time window is appropriate.

SPC is based on the production of process control charts. It is normal practice to take a series of measurements and then to plot the mean of the sample readings. This is because, under the central limit theorem, the distribution of the sample means tends towards a normal distribution, even if the underlying distribution is not normal. Processes can be plotted onto a control chart, such as Figure 9.6, to give a visual picture of their state of health. Figure 9.6 shows a plot of sample means taken at random, such as depicting the times breakfast was delivered in the hotel from the previous example. The chart shows the process mean (x̄), warning limits set at ± two standard deviations (2σ) and action limits at ± three standard deviations (3σ). The value of process control charts lies in the removal of a temptation to ‘meddle’ in the process. A number of readings (5 per cent) will be expected to lie between the warning and action limits. The general advice if a reading is taken in this zone is to take another reading before doing anything. Premature adjustments may take the process out of control. Figure 9.6 shows a process initially in control (the first six readings are a normal pattern), but then showing signs that the process mean has shifted, as a run of points are all moving in the same direction as opposed to the normal scattering of readings.

As process managers spend time understanding these processes, it is frequently possible to identify causes of variation. These can be divided into those causes that may be avoided, perhaps through automation or better training, and those that are unavoidable. An example of the latter might be the impact of bad weather on a breakdown service.
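The chart limits follow from basic statistics, as the short sketch below shows. It is a simplified illustration with invented delivery-time data: textbook charts usually estimate the process spread from within-sample ranges, whereas here, for brevity, we take the standard deviation of the sample means directly.

```python
# Setting up an X-bar control chart: warning limits at +/-2 and action limits
# at +/-3 standard deviations of the sample means. Data (minutes of deviation
# from the promised delivery window) is invented.
from statistics import mean, stdev

samples = [                      # samples of n = 4 readings each
    [1.2, -0.5, 0.8, 0.1], [0.3, 0.9, -1.1, 0.4],
    [-0.2, 0.6, 1.0, -0.7], [0.5, -0.3, 0.2, 0.8],
]
sample_means = [mean(s) for s in samples]
x_bar = mean(sample_means)                  # the process mean
sigma = stdev(sample_means)                 # spread of the sample means

warning = (x_bar - 2 * sigma, x_bar + 2 * sigma)
action = (x_bar - 3 * sigma, x_bar + 3 * sigma)
print(f"process mean {x_bar:+.2f}")
print(f"warning limits {warning[0]:+.2f} to {warning[1]:+.2f}")
print(f"action limits  {action[0]:+.2f} to {action[1]:+.2f}")

for m in sample_means:                      # the chart check, without the plot
    if not action[0] <= m <= action[1]:
        print(f"{m:+.2f} beyond action limit: investigate now")
    elif not warning[0] <= m <= warning[1]:
        print(f"{m:+.2f} in warning zone: take another reading first")
```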
SPC has been used extensively to control and improve ‘runners’ in high-volume, standard processes. Examples include:

● accuracy of cheque transactions in a major retail bank
● computer service response times
● sickness and absenteeism in a contact centre
● numbers of customer complaints per thousand transactions.

It would be wrong to give the impression that SPC is easily applied to all service processes. It is clearly applicable to factory-like processes where measures such as response times may be accurately assessed, but it is also valuable to apply the technique to attributes such as the number and intensity of customer complaints. As is often the case, the value may come as much from the discipline of thinking about the process as from the monitoring of the control chart.

Figure 9.6 Statistical process control chart (sample means plotted over time around the process mean x̄, with warning limits at ±2σ and action limits at ±3σ)

Quality systems

Some industries have had a long history of quality assurance, usually for reasons of health, hygiene or safety. Many manufacturing companies have been required to produce evidence of quality plans, schedules of inspection and records of quality checks being carried out. This activity has frequently been viewed somewhat negatively by operations managers, who consider it as something that does not add value to the operations activity, and indeed as stifling innovation and change. It is unfortunate that this quality assurance activity should be viewed as a ‘police officer’ operating in a somewhat negative way, preventing poor quality but not actively encouraging good quality. The British Quality Standard BS 5750 (now BS EN ISO 9000) and then the International Standard ISO 9000, and their associated standards, aim to correct this biased view of quality assurance.

High-volume, commodity-type services whose processes tend towards runners lend themselves most naturally to the quality systems approach. This is because processes can be mapped, and clear, consistent standards can be established and monitored throughout service provision. For example, many hotels have used standard operating procedures (SOPs) for a number of years covering aspects of service delivery such as the way that housekeeping cleans and prepares a room. This activity lends itself to checklists:

● Has the floor been vacuumed?
● Have the complimentary soaps and shampoos been replenished?
● Has the bed been made and turned down?
● Have the waste bins been emptied?

These SOPs translate readily into processes that can be audited for compliance under a quality system. They deal with relatively tangible outcomes rather than less tangible aspects of the customer experience. Retailers, banks and contact centres attempt to measure these aspects of the customer experience by using checklists, which might include statements such as ‘Did the member of staff use the appropriate greeting?’ or ‘Did the member of staff thank the customer for the order?’

The advantages of using quality management systems such as those related to ISO 9000 are as follows:

● Incorporating critical elements of service delivery in a process that has been mapped, described and measured in such a way that it can then be audited develops a discipline that may not have existed previously.
● External auditing and recognition of this success in the award of a certificate is good for internal morale and external reputation.
● The better quality management systems include a formal review process, which prompts the organisation to consider what needs to be done differently in order to improve.
● The process of preparing for external accreditation requires the organisation to document its processes, and this should be used as an opportunity for process redesign before application.

In recent years the ISO 9000 approach has been totally revised and re-launched as ISO 9000:2000. The emphasis here is on the development of a quality management system that has the objective of creating processes that reflect customer requirements and are sensitive to changing market conditions. The previous criticism of these systems was that they evaluated process adherence rather than looking at whether or not the process was appropriate for the service task. ISO 9000:2000 now concentrates more helpfully on creating the management system to deliver quality targets.