1 The exam
Dates:
• May 21
• May 28
• June 11
Starts at 4pm here at A318. Duration – 90 minutes.
The exam will be in the form of test & free-form answers:
Section A – 15 questions, each with only one correct answer.
Section B – 5 questions with more than one correct answer.
Section C – 3 questions with free-form answers.
Each question is rated with points, which are then summed within sections, weighted per section, and the total gives the result. Do not forget!

Customer Services in ITSM

3 Why, how and what
An end-to-end process refers to a complete sequence of activities, from initiation to outcome – the standard procedure a service or system follows from beginning to end to deliver a complete result, often without a third-party solution. In capturing an end-to-end process, a business owner or project lead monitors and analyzes every process or procedure the company performs to achieve a complete outcome. They then decide how to make the entire process work at maximum efficiency, whether by automating some of the work or standardizing a particular project flow.

4 Importance of Service Desk
For individuals:
• Starting job for new talent
• Helping people find a path in IT
• Brewery for leadership and other career paths
• Communication skills as a basic tool of future success
• Improvement of language skills
• Real experience
• Understanding the bigger picture
For business:
• Face of the business
• Showplace for services
• Key component of the E2E service
• E2E overlap
• Ideas for business improvement
• Center of success promotion
• Project support

5 Purpose
Why: First line of contact for end users, which directly resolves customer queries or coordinates with other resolver teams on behalf of the client. The SD unit is also a supporting source of information for Service and Project management decision making.
How: SLAs set to measure the main success criteria:
• First Call Resolution %
• Speed To Answer %
• Customer Satisfaction %
What: The fastest possible resolution of as many of the client's IT requests and incidents as possible, with high customer satisfaction.
(Diagram: Client – Service Mngmt – IT delivery teams – Project Mngmt – Service Desk.)
What are the common reasons why clients leave? Imagine a restaurant…

6 SERVICE DELIVERY TRIPOD – Common customer loss reasons
Cost, Quality, Compliance, Existence
• 1% of customers go out of business
• 3% move to another location
• 4% like to change suppliers
• 5% change on a friend's advice
• 9% buy it cheaper somewhere else
• 10% are chronic complainers
• 68% leave because the company representatives they deal with are indifferent to their needs
This model's main problems? Bottleneck & cost.

7 Traditional IT delivery structure – Service Desk: Bottleneck & Cost
Simplified SD operations are highlighted in the chart on the right. The model is focused on automated processing of as many incoming requests as possible. Where that is not possible, skilled CSRs – Customer Service Representatives (Service Desk) – answer the incoming query, whether it arrives via chat, phone, e-mail or a self-service (web) request. Requests that require the involvement of other groups for resolution are transferred onwards in the form of a ticket. The primary goal is to minimize the need to involve other groups and have queries resolved through automation (level 0) or at 1st level (Service Desk). The knowledge base is mostly personalized to each customer and can be either web-based or database-based, for example in Lotus Notes.
Its purpose is to help resolve the customer's query or to provide the appropriate process for reaching a resolution (which specific data need to be gathered and where to look for further support). As the CSR is speaking to the customer, he/she will also document the details of the call in a ticketing system. Each ticket will contain some basic customer details, machine information, a problem/request classification and description, including the steps taken to resolve the query. There may be slight variations from client to client, but the base remains the same.

8 Service Desk Strategy
The strategy is to get things done in a more efficient way: Automation → Service Desk (KB) → Other units (including vendors).

9 Service Desk Workload – Phone/Chat
Phone calls and chat usually make up the majority of the incoming workload. To manage them effectively, SDs develop workload arrival patterns from historical data. These show the times when customers are most likely to call or chat – see the chart on the right as an example of single-day data. This data is used to build the staffing schedules needed to ensure the optimum number of staff is available throughout the day. While there is usually also an optimum call length, the CSR cannot dictate the length of each call; instead, based on the request classification, the SD decides which cases are considered out of its scope in order to achieve the necessary availability (example: a reinstallation with multiple steps and an average length above 20 minutes will not be carried out by the SD and will be transferred to another group). This is fundamental to the principles of SD operations. As every eventuality cannot be forecast (e.g. a network outage), staffing needs to be flexible in order to respond immediately to changing call patterns: at any time, slight adjustments may need to be made in reaction to a significant change in incoming customer queries.

10 Service Desk Workload – Web (Self-service), E-mail
Imagine the challenge – how would you properly prioritize the web queue?
Service providers encourage the use of the web as a means of raising a query with the SD. The web interface is often coupled with self-help options, as these are designed to encourage users to search for answers themselves before reaching the SD. The advantage of the web is that the SD operation can respond to these queries during periods of low phone-call or chat volume, which means fewer scheduling challenges. The disadvantage is that the CSR does not have the opportunity to ask the customer the detailed problem-determination questions (as he/she would over the phone) that may be required to resolve the issue. As a result, there may be insufficient information available to the CSR to answer the query, and the CSR may often be forced to contact the customer to clarify details before the problem can be resolved.
E-mails are broadly similar to web-based queries in terms of submission by the customer and pickup by the CSR. The disadvantage is that while the web-based solution can force the customer to supply some critical information, the e-mail solution rarely does so. Furthermore, it is very difficult to measure the effectiveness of the SD operation in managing e-mail requests. Therefore, service providers will try to encourage customers to use a combination of web and phone for raising queries with the SD. A modern trend is to automatically transform e-mails into self-service tickets through automation based on the contents of the e-mail (keywords, attachments, etc.), or to remove e-mail completely and use only web tickets.
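The e-mail-to-ticket automation mentioned above is often little more than rule-based classification of the message contents. Here is a minimal sketch in Python; the keyword rules, categories and ticket fields are illustrative assumptions rather than any specific ticketing tool's API:

```python
import re
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative keyword rules - a real SD would maintain these per client/account.
KEYWORD_RULES = {
    "password": ("Access", "Password reset request"),
    "vpn": ("VPN", "VPN connectivity issue"),
    "printer": ("Printer", "Printing issue"),
    "outlook": ("O365", "Mail client issue"),
}

@dataclass
class Ticket:
    customer: str
    classification: str
    description: str
    source: str = "e-mail"
    opened: datetime = field(default_factory=datetime.now)
    worklog: list = field(default_factory=list)

def email_to_ticket(sender: str, subject: str, body: str) -> Ticket:
    """Turn an incoming e-mail into a self-service ticket using keyword matching."""
    text = f"{subject} {body}".lower()
    for keyword, (category, summary) in KEYWORD_RULES.items():
        if re.search(rf"\b{keyword}\b", text):
            return Ticket(customer=sender, classification=category,
                          description=f"{summary}: {subject}")
    # No rule matched - route to the generic queue for manual triage by a CSR.
    return Ticket(customer=sender, classification="General", description=subject)

ticket = email_to_ticket("user@example.com", "Cannot connect to VPN", "Error 789 since this morning")
print(ticket.classification)   # -> "VPN"
```

In practice such rules would sit in the ticketing or automation tool's routing configuration rather than in code, and unmatched messages would land in a manual triage queue as shown here.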
11 Roles and Responsibilities – Customer Service Representative / Technical support; Team Leader / Account Leader / Service Desk Manager
CSRs are usually grouped into teams of approximately 15 people, but the individual performs most of his/her tasks alone. A call is received, the CSR works with the customer 1:1 and queries the knowledge base as appropriate to retrieve the answers to the customer's queries. Should the query require the intervention of another group, the ticket is usually transferred electronically by the CSR. The SD operation is planned in great detail; punctuality is therefore of the utmost importance for all members. The SD is a fast-paced, measurement-driven environment. Each CSR will be measured against multiple targets, including punctuality, first-call resolution, call duration/hold-time usage, call quality, ticket quality and customer satisfaction, as well as other quantitative data. Given the nature of the business, data is available on a very regular basis to track performance versus targets.
Each team will have a Team Leader who is responsible for the day-to-day performance of the team. As teams can be grouped based on language or product support, they may provide support for multiple clients/accounts. Clients are often supported through multiple teams; therefore, while the Team Leader is responsible for day-to-day team performance, under such circumstances there is a need for Account Leaders as well. Account Leaders are responsible for day-to-day performance across teams for their clients and ensure that differences in team performance are balanced so the service is delivered within contractual targets. Each SD Manager will have 2–3 teams reporting to him/her, while also having ownership of performance for some of the clients/accounts. While the SD Manager has ultimate responsibility for the performance of the team, he/she is less likely to be involved in the day-to-day management of the line and focuses rather on long-term planning and continuity of the service. The SD Manager owns the relationship with the Customer and the Account leadership.

12 Roles and Responsibilities – organization chart

Glossary
SLA ::: Service Level Agreement – Contracted service level, usually bound to penalties (STA, CSAT…)
SLO ::: Service Level Objective – Commitment to maintain a particular state of the service in a given period
KPI ::: Key Performance Indicator – KPI measurement may be functional to the achievement of strategic goals, quality standards, or performance assessment against any given factor
ASA ::: Average Speed of Answer – The average time it takes to pick up a customer telephone call
STA ::: Speed to Answer – The percentage of calls taken within the contracted time
CSAT ::: Customer Satisfaction – A measure of how satisfied customers are with the service they have received from the SD
SD ::: Service Desk – The formal title given to Helpdesks
CSR ::: Customer Service Representative – The official title of the 'agents' who work within the SD
KB ::: Knowledge Base – Storage of client-specific knowledge documentation
SPOC ::: Single Point of Contact – Standard term for services where the SD is the only function communicating with the client

13 SD Measurements
Time metrics
AR (Abandon Rate) ::: Example: AR <= 6%. Percentage of dropped calls over total offered (incoming) calls.
ASA (Average Speed of Answer) ::: Example: ASA <= 20 sec. The average time (usually expressed in seconds) it takes for the Service Desk to answer an incoming call.
STA (Speed To Answer) ::: Example: STA >= 80% in 20 sec. The percentage of incoming calls answered within a given time frame (usually expressed in seconds).
Email/Web Response Time ::: Example: Email Resp. >= 80% in 2 hrs, or Avg Email Resp. <= 2 hrs. The time taken to react to a customer e-mail request (can be a response or ticket creation). The calculation may be based on the same principles applied to ASA or STA.
Quality metrics
FTF (First Time Fix), SDE (Service Desk Effectiveness) or FCR (First Call Resolution) ::: Example: FCR >= 70%. Percentage of eligible requests resolved by the Service Desk with no need for technical escalation.
CSAT (Customer Satisfaction) ::: Example: CSAT > 80%. Customers are polled to measure their satisfaction with the service provided. This is mostly done in an automated fashion through a tool, or alternatively through a phone call to the customer.
Given the nature of the SD business, performance data is produced minute by minute throughout the day. The key measurements are agreed with the customer in the contract. Targets must be: attainable, repeatable, measurable, understandable, meaningful, controllable, affordable, mutually acceptable.

14 Supportive mechanisms and tools

15 TOOLS SUITE
Staffing / Personnel: Staffing Module. Telephony: Avaya Line Monitor, Call Quality Monitor. Call Handling: Ticketing, Knowledge Base. Reporting: Reporting Module.
An SD operation will employ a complex system of tools in order to effectively manage the service it provides. These tools are usually divided into the following categories: Staffing & Personnel Management, Telephony, Call Handling and Reporting.
STAFFING MODULE ::: As previously mentioned in this package, it is critical for the success of an SD operation to have the correct number of people staffed on the line throughout the day. This staffing is based on historical data, which can be used to predict the number of calls expected. This, in turn, is used to calculate the associated number of people required to handle these calls. The Staffing Module can be a complex tool which is integrated into the phone system and extracts data on a real-time basis, while some SD operations use simpler spreadsheet-based tools. Regardless of the solution, the Staffing Module will be responsible for items such as scheduling, vacation management, break management and training scheduling.

16 TOOLS SUITE
How would you ensure monitoring and SLA delivery?
TELEPHONY MODULE ::: The telephony module is the core of the SD operation. Without it, there is no Service Desk; these systems give call centres their shape. All SDs will use a tool to monitor real-time call activity (how many calls are in the queue, actual performance against targets, etc.). These systems also allow management to monitor CSR activity (how many people are on calls, how many people are available to take calls, etc.). Such tools are very powerful but need to be leveraged to ensure that all targets are achieved. As an SD is a customer-focused service, the quality of each and every call is of the utmost importance to management. Therefore, it is commonplace to deploy a system which can record a sample of the calls received by the centre. These are then evaluated against a set of standards and the individual CSR may receive coaching as required.
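To make the time metrics above (AR, ASA, STA) concrete, here is a minimal Python sketch of how they could be derived from raw call records such as those a line monitor exports; the record format, field names and example data are illustrative assumptions rather than any particular telephony tool's output:

```python
from dataclasses import dataclass

@dataclass
class CallRecord:
    wait_seconds: float   # time the caller spent in the queue before answer or hang-up
    answered: bool        # False if the caller abandoned (dropped) the call

def abandon_rate(calls: list[CallRecord]) -> float:
    """AR: percentage of dropped calls over total offered (incoming) calls."""
    dropped = sum(1 for c in calls if not c.answered)
    return 100.0 * dropped / len(calls)

def average_speed_of_answer(calls: list[CallRecord]) -> float:
    """ASA: average wait time (seconds) of the calls that were answered."""
    waits = [c.wait_seconds for c in calls if c.answered]
    return sum(waits) / len(waits)

def speed_to_answer(calls: list[CallRecord], threshold_sec: float = 20.0) -> float:
    """STA: percentage of answered calls picked up within the threshold.
    (Whether the denominator is answered calls or all offered calls is a
    contractual choice; answered calls are assumed here.)"""
    answered = [c for c in calls if c.answered]
    within = sum(1 for c in answered if c.wait_seconds <= threshold_sec)
    return 100.0 * within / len(answered)

# Illustrative data checked against the example targets quoted above.
calls = [CallRecord(5, True), CallRecord(12, True), CallRecord(40, True), CallRecord(70, False)]
print(f"AR  = {abandon_rate(calls):.1f}%  (target <= 6%)")
print(f"ASA = {average_speed_of_answer(calls):.1f} s (target <= 20 s)")
print(f"STA = {speed_to_answer(calls):.1f}%  (target >= 80% in 20 s)")
```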
17 TOOLS SUITE
How would you build a KB?
CALL HANDLING MODULE ::: Once a CSR has received a call, he/she is required to document all customer interactions in a ticketing system. It is important to carefully gather basic information about the customer's issue, as this is essential for swift problem resolution. Furthermore, should the CSR be unable to resolve the query, the information will be passed electronically (through the ticketing system) to another group. These details will be required by this group as well, so they need to be clearly documented in the ticket. To help with the resolution of the problem, the CSR will use a knowledge base of information. By asking the customer clear problem-determination questions and using this information to query the database, the CSR will be able to find detailed, step-by-step instructions to resolve the customer's problem.

18 TOOLS SUITE
REPORTING MODULE ::: An SD operation is measured against a number of key targets (STA, Abandonment Rate, First Call Resolution and Customer Satisfaction). While the exact nature of the targets may vary from account to account, the requirement to produce performance reports at regular intervals does not. SD operations often deploy automated tools to produce the required customer reports. This can include an automated survey tool which sends an electronic survey to customers and tabulates the responses, or web-based solutions that report on the performance of the account versus the telephony metrics (ASA, Service Level and Abandonment Rate). Given the volume of work handled by SD operations, it is important to automate as much of the reporting as possible, as manual report generation is both time-consuming and prone to error.
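As a small illustration of the automated survey tabulation mentioned for the Reporting Module, here is a minimal sketch, assuming a hypothetical survey where customers rate the service on a 1–5 scale and a score of 4 or 5 counts as satisfied (both the scale and the satisfaction rule are assumptions and differ between accounts):

```python
from collections import Counter

def tabulate_csat(scores: list[int], satisfied_threshold: int = 4, target_pct: float = 80.0) -> dict:
    """Tabulate survey responses into a simple CSAT report.

    scores: 1-5 ratings returned by customers (the scale is an assumption here).
    Counting a response as 'satisfied' at >= 4 is also an assumption; real
    accounts define this rule in the contract.
    """
    distribution = Counter(scores)
    satisfied = sum(1 for s in scores if s >= satisfied_threshold)
    csat_pct = 100.0 * satisfied / len(scores)
    return {
        "responses": len(scores),
        "distribution": dict(sorted(distribution.items())),
        "csat_pct": round(csat_pct, 1),
        "target_pct": target_pct,
        "target_met": csat_pct >= target_pct,
    }

# Example run with illustrative survey data.
report = tabulate_csat([5, 4, 4, 3, 5, 5, 2, 4, 5, 4])
print(report)
# {'responses': 10, 'distribution': {2: 1, 3: 1, 4: 4, 5: 4}, 'csat_pct': 80.0,
#  'target_pct': 80.0, 'target_met': True}
```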
20 Example Business setup

21 Kyndryl Client Innovation Centres aka Collaboratives
Poland, Central Europe, Johannesburg, India, Costa Rica, Buenos Aires, Hortolandia, Malaysia, Philippines, China

22 Supporting more than 500 clients
Across all European regions as well as Global accounts.
Range of services: Storage Management, Mainframe, Server Management, Data Management, Distributed Application Hosting, Mobility & Workplace Platform Management, Mobility & Workplace Device Management, MWS Cross Service Line, Mobile Client Care Services – Service Desk, Automation, Network services, Incident, Problem & Change Management, Integrated Service Management, Incident, Problem & Change Coordination, Service Availability Managers, Delivery Project Executives, Service Support Management, RFS, T&T / Project Services, Transition and Transformation Delivery, Transformation Project Office Management, Identity & Access / Infrastructure Protection, Security and Risk Management, Compliance & Regulatory Program Management, Security Operations Management, System Currency, Client Management, Asset Management.
Language skills: English, German, French, Czech, Russian, Spanish, Dutch, Italian, Turkish, Hungarian, Portuguese, Polish, Greek, Slovak, Brazilian.

23 Kyndryl Collaboratives Brno – Employee Demographics & Diversity
The average share of female graduates from IT universities is 10%. Citizenship: Czech Republic – 57% domestic, 43% other; Hungary – 99% domestic, 1% other. Gender diversity: Czech Republic – 33% female, 67% male; Hungary – 26% female, 74% male. (University-degree split charts for both countries; values not shown.)

Kyndryl Service Desk – Global Footprint
• 15 of the world's largest Banks & Financial Companies
• 11 of the world's largest Industrial Products & Retail Companies
• 11 of the world's largest Media & Communication Companies
• 9 of the world's largest Insurance Companies
• 9 of the world's largest Healthcare and Life Sciences Companies
• 8 of the world's largest Electronics Companies
• 8 of the world's largest Automotive Companies
• 5 of the world's largest Airline Companies
AP Languages: Arabic, Chinese, Dutch/Flemish, Eastern European, English, French, German, Italian, Japanese, Nordics, Portuguese, Spanish. Designated MMS for Mac.
Service Desk delivery sites: San Jose, Costa Rica; Buenos Aires, Argentina; Dublin, Ireland; Brno, Czech Republic; Wroclaw, Poland; Cairo, Egypt; Johannesburg, South Africa; Cyberjaya, Malaysia; Manila, Philippines; Shenzhen, China; Dalian, China; Noida & Gurgaon, India; Bangalore & Hyderabad, India; Hortolandia, Brazil; Boulder, USA.
Collaboratives Czech Republic (Brno):
• Incoming contacts: 60,000 contacts/month
• Languages: 16 languages
• Supported countries: 50+ across all continents
• Supported clients: 22 with individual requirements
• Customer satisfaction: 90% average customer satisfaction
(Governance chart: Delivery Partner Executive, Client Executive, Service Desk Director, Local Client Site Leaders and local management; internal alignment and cross-CIC cooperation; executive reporting and communication; optional specific local operational cooperation.)

26 Business Continuity & Disaster Recovery

27 Delivery management

28 Analytics
Right-2-Left ::: Regular checks on tickets resolved by OSS with the aim of finding remotely resolvable cases, improving reaction time, CSAT and cost. Each newly identified case is then recorded in the knowledge base. The activity is driven by each account's CTS person. R2L is measured as a comparison of tickets closed by the SD, CTS and OSS. Outputs are presented to clients.
Mis-assigned ::: A flagging system (tag or worklog type) is established on most accounts. It supports cooperation across groups with a direct, efficient focus on fixing the issue and targeting the RCA. The reaction time to fix a flagged ticket is only 1 day. This is a best practice across accounts, usually driven by the Service Leader or CTS. Outputs are presented to clients.
CSAT analysis ::: Each DSAT is analyzed by the quality team; each user is called or e-mailed. The process is standard, with minimal account deviations. Three types of actions are derived based on the causing unit: SD caused – direct feedback; IBM caused – addressed to the account; client help needed – addressed to the client or account team.

29 Escalation process and internal reviews
Continuous business review ::: Monthly review per team/account led by the Business Operations Manager, covering the main topics in the unit, team and account, unit strategy, operational points, people development…
Escalation process ::: Account customer-care mailboxes streamline the escalation flow and prevent fragmented communication. Tickets are flagged for easy tracking, for the daily overview, and to speed up processing. A monthly overview of escalated tickets can be created to close the loop, as sketched below.
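A minimal sketch of how such a monthly escalation overview could be assembled from a flagged-ticket export follows; the ticket fields, type labels and dates are illustrative assumptions, and a real overview (like the example table below) would be a pivot over the ticketing tool's actual export:

```python
from collections import defaultdict
from datetime import date

# Illustrative flagged-ticket export: (ticket id, type, category, opened, closed).
flagged = [
    ("T1", "Escalation",              "O365",    date(2023, 3, 2), date(2023, 3, 10)),
    ("T2", "Ticket chase out of SLA", "VPN",     date(2023, 3, 5), date(2023, 3, 25)),
    ("T3", "Ticket chase in SLA",     "Network", date(2023, 4, 1), date(2023, 4, 6)),
    ("T4", "Escalation",              "Printer", date(2023, 4, 3), date(2023, 4, 20)),
]

def monthly_overview(tickets):
    """Group flagged tickets by closing month and type, with average age in days."""
    counts = defaultdict(lambda: defaultdict(int))   # month -> type -> count
    ages = defaultdict(list)                         # month -> list of ticket ages
    for _id, ticket_type, _category, opened, closed in tickets:
        month = closed.strftime("%Y-%m")
        counts[month][ticket_type] += 1
        ages[month].append((closed - opened).days)
    return {m: {"per_type": dict(counts[m]),
                "total": sum(counts[m].values()),
                "avg_age_days": sum(ages[m]) / len(ages[m])}
            for m in sorted(counts)}

for month, row in monthly_overview(flagged).items():
    print(month, row)
# 2023-03 {'per_type': {'Escalation': 1, 'Ticket chase out of SLA': 1}, 'total': 2, 'avg_age_days': 14.0}
# 2023-04 {'per_type': {'Ticket chase in SLA': 1, 'Escalation': 1}, 'total': 2, 'avg_age_days': 11.0}
```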
(Table: "Escalations per type (based on closed month)" – count of ticket IDs per closing month (months 3–12 and 1–2, plus in-progress and being-closed), split by type (ticket chase out of SLA, ticket chase in SLA, escalation, processed OK) and by category (O365, Phone, Network, Account, VPN, PC other, LN, Access, Drive, SAP, Printer, SLA-HOLD, Hardware, VoIP, Security, Server, Restore, Win 10 migration, ICD), with grand totals and average/median ticket age per month.)

Delivery management
• Live reporting of workloads visible to all people
• Half-hourly reports for leaders
• Daily Service Level and KPI performance overview (risk assessment and decision making)
• Regular cross-geography touchpoints for MTD results (weekly in BAU)

30 External quality system
Interlocks ::: Weekly interlocks serve operational purposes, usually an Excel AP. Monthly interlocks are designed for a service overview and the results of detailed analysis – e.g. mis-assigned tickets, R2L, escalations, call drivers, CSAT, progress of bigger improvement points, SLA review… The pack varies based on the size of the account.
Governance and reporting ::: During the initial phase there is a daily interlock with a higher amount of daily data; in BAU there are only weekly and monthly interlocks. Each report must have a specific purpose that measures the success of the underlying actions; reports that do not drive anything are not accepted or are cancelled.
Strategy planning ::: Strategic planning ensures the execution of meaningful long-term plans. Creating such a plan usually requires a client visit and, ideally, a visit by IBM to the client site to gather the impressions and data needed to define matching problem statements, objectives, goals, tasks and measurements. The plan is reviewed every month; both IBM and the client usually own part of the actions, and IBM helps to identify such cases and provides best practices for these.

We offer an integrated suite of capabilities designed to enhance employee productivity, manage costs and deliver a superior personalised experience. As you can see, the user can choose how they get the support they need. We will work with you to define how this omni-channel approach looks and presents to end users. The omni-channel support service includes support via virtual agent, self-support capabilities, live-agent assist through chat and phone, Level 1.5 support, deskside support and client centres. All of these are powered by IBM's analytics capabilities.
31 Digital Workplace Support – an integrated, omnichannel service
(Diagram: Digital Workplace Services – Self-enablement & Virtual Support, Live-Agent Chat, Live-Agent Phone, Level 1.5 Support, Deskside Support, Client Center, Endpoint Analytics, Incident Management, Customer Satisfaction, Service Analytics, Customers.)
We appreciate that you may be in one of many stages of your Workplace Transformation, and our objective is to work with you to ensure your service transforms at a pace that drives the maximum return on investment for you, creates a great user experience and isn't disruptive to your end users. The first part of this transformation is to ensure that you have the right foundation from which to transform your service; we first need to ensure we understand what types of issues you have, drive process efficiencies, build user self-help content and use simpler automation, such as automatic password reset, to ensure we hit a return on investment quickly and steer your transformation in the right direction. In the next few slides, we will talk about the analytics that will help create the insights we use.
Once we have a clear understanding of your users and their issues, we will implement solutions that resonate with end users, empowering them to get their issues fixed quicker. Here, using AI and automation, we will reduce the number of issues users face, augment their channels of support and make these more personal, provide immediate support for a range of issues, and create an overall superior user experience. Vendors work with clients to build a roadmap for the future of their IT End User Support.
(Roadmap diagram – from Traditional through Productivity and Analytics to Automation and Cognitive: Single Point of Contact, Level 1 / Level 2, Service Desk Efficiency Optimization, Automated Password Reset, Ticket Analysis and Transformation, Multi-Channel Support, Detailed User-Facing Knowledge Content, Advanced ITSM Tooling, Personalized Support, Digital Experience Monitoring, Robotic Process Automation, Persona-Driven Support, Semantic Ticket Analytics, Self-Healing, Cognitive Support, Virtual Personal Assistant.)
I talked about the drivers that are shaping the way we are thinking about and building the service desk of the future. Now I want to touch on our primary goals behind all of this work. The three primary goals for our workplace support services are delivering a superior end-user experience, lowering the total cost of ownership and driving business outcomes. These three factors are interdependent and therefore cannot be looked at in isolation. By focusing our solutions on providing a great end-user experience, we aim to increase user satisfaction with our transformation. Creating a great end-user experience requires us to ensure the solutions we implement are of high quality, and are simple and effective for users to get benefit from; this starts with driving problem avoidance and increasing the speed at which users can get the support they need. This will increase user productivity, and these experience improvements are business-outcome focused. To meet our objectives of being experience- and business-outcome focused, we leverage our analytics and cognitive systems to predict potential issues and stop them from occurring, and to identify issues in real time and proactively resolve them before they impact users.* At the same time, by creating a great end-user experience, we improve the collaboration between end users and our Workplace Support Services, and ensure our strategic initiatives provide visible value to end users.
This increases users' engagement with these initiatives, which increases the utilization of our end-user-facing tools, adds value to the business and reduces the total cost of ownership. By way of example, our Virtual Personal Assistant solution provides a one-stop shop for users to get immediate IT support. As the Virtual Personal Assistant enables users to get answers, automate fixes and raise tickets quickly, it improves user productivity (business outcomes) and delivers a great experience. This in turn drives adoption of self-service solutions by empowering the user, which in turn drives a significant reduction in the total cost of ownership.
NOTE TO SPEAKER: * This is a very important factor for us to get into, and I encourage you, when you are talking to a client, to understand some of their pain points. That is, understand what their industry is doing and some of the things that may be salient to them from a business-outcomes perspective. Put yourself in the shoes of some of the folks who are calling the service desk. Try to understand if they are client facing themselves. Try to understand whether or not some of our proactive technologies could eliminate issues from ever occurring and/or help workers provide better customer service for the company they are representing. We want to be able to drive those business outcomes. For 2019, this is the most important focus point for us – that we are developing and delivering technology that brings improved business outcomes. ** We want to really highlight the proactive and predictive capabilities through analytics and automation.

33 The Workplace Support Services with AI are focused on transforming IT support
Lower total cost of ownership. Drive business outcomes. Deliver a superior end-user experience.
– Automation and self-service provide low- or no-touch problem resolution
– Enables the workforce to become more self-sufficient and productive
– Ubiquitous, persona-driven architecture enables a personalized experience
– Maintains contact no matter the device and enables "follow me" support regardless of channel
– Predictively or proactively resolves issues
– Self-healing drives down the number of incidents and reduces MTTR

What's ITXM™?
Experience management is a practice that has been used in external customer support and management – via the concept of customer experience (CX) management – for well over a decade. In IT operations, by contrast, IT service management (ITSM) best practices have been employed for process operations and wider management, but these have never focused on the delivered experience or, more specifically, held experience as a critical outcome of the employed ITSM processes. ITXM brings experience management principles into IT operations. This allows IT teams to focus on the outcomes of their work and to provide data-informed insights that fuel the continual improvement of those outcomes – including making end-users happier with IT services and more productive in their daily work. Importantly, ITXM applies to the whole IT organization, not just the IT service desk and the wider ITSM capabilities.

34 The IT Experience Management (ITXM) Framework
Using the ITXM™ Framework's continuous cycle (as shown), end-user experiences are measured and shared with various business stakeholders (including IT) and third parties such as partners, vendors, and shareholders. Issues and opportunities are identified using the real-time experience data, highlighting where people are frustrated and losing the most productivity with IT services and support.
This approach is in line with the DevOps "second way" of amplifying feedback loops so that "corrections" can be continually made. Finally, improvements are made based on business value, such that IT operations and outcomes start to bring more smiles and less wasted time for both end-users and IT personnel in their daily work – which aligns well with the ITIL 4 guiding principle of "focus on value." As improvements are actioned, the measurement continues, and both the absolute state and the achieved progress can now be shared – with the organization continuing around the Framework cycle, tackling more end-user issues and opportunities as the benefits of ITXM™ permeate the organization.

What are XLAs?
Experience Level Agreements (XLAs) are a reimagining of SLAs that focus on what's most important to end-users. Within these, XLA targets are end-user-centric metrics and key performance indicators (KPIs) that focus on the perceived quality of IT services and support. They measure the performance of IT by quantifying the end-user experience and IT service outcomes. Ultimately, XLAs measure performance in outcome and value terms, whereas SLAs usually focus on operations and outputs. SLAs measure the process or the completion of an objective; XLAs, on the other hand, measure the outcome and the value of the service provided.
Currently, most organizations use Service Level Agreements as their key performance metrics. However, relying solely on SLAs can cause the so-called Watermelon Effect, a common phenomenon in IT organizations where the KPIs and Service Level Agreements indicate that everything is going well ("green" on the outside), but end-users report that they are having poor experiences with IT ("red" on the inside).
XLAs don't have a universal formula that companies should follow. Instead, XLAs should be tailored to an organization's IT problem areas and the outcome and value they want to achieve.

35 Experience Level Agreements aka XLAs
SLAs are a documented agreement between the service provider and the customer that identifies the services required and the expected levels of service. XLAs are a contract between a service provider and a customer founded on the quality of the service experience with the provider's services.

We can use many UX metrics and KPIs (key performance indicators) to measure user experience with quantitative methods. Some of the most common ones are:
• Happiness: This metric measures the user's satisfaction, enjoyment, and preference for a product or service. It is usually collected via surveys that ask users to rate their experience on a scale (e.g., 1–5 stars, 1–10 points, etc.). Some examples of happiness metrics are Net Promoter Score (NPS), Customer Satisfaction Score (CSAT), and Product Satisfaction Score (PSS).
• Engagement: This metric measures the user's involvement and interest in a product or service. It is usually collected via analytics that track user behavior on a website or app (e.g., page views, time spent, bounce rate, etc.). Some examples of engagement metrics are Session Duration, Pages per Session, and Bounce Rate.
• Adoption: This metric measures the user's acquisition and adoption of a new product or feature. It is usually collected via analytics that track user behavior on a website or app (e.g., sign-ups, downloads, activations, etc.). Some examples of adoption metrics are Conversion Rate, Activation Rate, and Feature Usage Rate.
• Retention: This metric measures the user's loyalty to and retention of an existing product or service. It is usually collected via analytics that track user behavior on a website or app (e.g., return visits, churn rate, retention rate, etc.). Some examples of retention metrics are Churn Rate, Retention Rate, and Customer Lifetime Value (CLV).
• Task Success: This metric measures the user's efficiency, effectiveness, and error rate when completing a specific task with a product or service. It is usually collected via usability tests that observe user behavior on a website or app (e.g., task completion rate, time on task, number of errors, etc.). Some examples of task success metrics are Task Completion Rate (TCR), Time on Task (ToT), and Error Rate.

XLA mistakes in comparison to SLAs
1. Avoid positioning XLAs in the same way as SLAs
The purpose of your SLAs may be wrong. You cannot measure experience with an SLA metric, nor use it as a measurement for improvement. SLAs may also exist in an organization only because:
• ITIL says so
• IT departments use them to defend their performance to stakeholders
• They've been inherited from previous teams or tools
• IT believes "we need them" to be able to control our providers
Positioning SLAs this way creates no real direction for IT, won't create a positive change for end-users or service desk agents, and won't give you true insight into your IT service experience. Therefore, you also need to apply this to XLA positioning. Create a goal and meaning behind each XLA you design and define how it is going to add value to your end-users. Setting up an XLA just because industry experts (or this guide) say so isn't enough. Discover what you want to solve or improve for an end-user with XLAs before you set them.

36 SLA vs. XLA
SLA – Service Level Agreement | XLA – Experience Level Agreement
• Measures the output of IT. | Measures the outcome of IT.
• Measures the processes. | Measures the added value and productivity of services.
• Focuses on high-level objectives that can easily be met, but doesn't paint an in-depth picture of what is really happening within IT. | Brings focus directly to end-users' experience and needs.
• Shows whether IT is delivering projects within the right time frame and budget, ignoring the true success measure(s) of projects. | Brings business value and increases the productivity of end-users.
• Focus on sanctions. | Focus on rewards.
• Measurement stays the same. | Measurement target levels constantly change.
You can still use SLAs in conjunction with XLAs; however, make sure you're measuring the right things – i.e. XLAs to measure experiences and outcomes, SLAs to measure outputs.

2. Avoid creating XLAs in the same way as SLAs
SLAs have likely had no, or barely any, involvement from end-users. This is one of the biggest mistakes you can make when devising XLAs. Experience level agreements should be directly linked to end-user experience; therefore it is pivotal that you involve end-users in XLA discussions. Another common mistake is using a "one-size-fits-all" approach, similar to what has been found in SLA usage. XLAs need to be tailored to experience and to an organization's problems or needs. Furthermore, SLAs focus on what you can physically measure: time, budgets, tickets, etc. This completely misses the most important thing – the end-user's experience. Make sure that XLAs measure the most important and valuable outcomes instead of traditional operational data.

3. Don't use XLAs the same way as SLAs
SLAs have traditionally been used to show targets being hit, not as an indicator for improvement. This tends to come down to what SLAs are set to measure. XLAs should be the driver for change and the springboard to act on your end-users' pain points within their service experiences. Another negative use of SLAs has been that they've remained the same, measuring the same outputs over time. This means they're rarely reviewed or updated to reflect changing business needs and priorities. With XLAs, you need to be continuously measuring your end-users' experience and then updating your XLAs according to the received experience data. End-users' experiences will change over time, therefore different goals and targets will need to be set in order to meet end-users' needs. It could even be said that the most important usage of XLAs, or of any experience-related measurement, is to identify areas to improve. SLAs might help you to understand what is wrong with the process or service itself, but they can't be used to determine where the experience is lacking.
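To make the watermelon effect and the SLA-versus-XLA contrast above concrete, here is a minimal sketch that puts an SLA-style output metric next to an XLA-style experience metric for the same tickets; the record fields, the 1–5 experience scale, the "satisfied at 4 or more" rule and the warning thresholds are illustrative assumptions, not a prescribed XLA formula:

```python
from dataclasses import dataclass
from statistics import mean
from typing import Optional

@dataclass
class ResolvedTicket:
    resolved_within_sla: bool        # SLA view: was the contracted target met?
    experience_score: Optional[int]  # XLA view: 1-5 survey rating (None = no response)

def sla_vs_xla(tickets: list[ResolvedTicket]) -> dict:
    """Contrast an output metric (SLA compliance) with an outcome metric (experience)."""
    sla_pct = 100.0 * sum(t.resolved_within_sla for t in tickets) / len(tickets)
    rated = [t.experience_score for t in tickets if t.experience_score is not None]
    happy_pct = 100.0 * sum(1 for s in rated if s >= 4) / len(rated)
    return {
        "sla_compliance_pct": round(sla_pct, 1),       # the "green" output view
        "avg_experience_1to5": round(mean(rated), 2),
        "happy_users_pct": round(happy_pct, 1),        # the "red" outcome view if low
        # Assumed thresholds: SLAs look green while few users are actually happy.
        "watermelon_warning": sla_pct >= 95.0 and happy_pct < 60.0,
    }

# Illustrative data: every ticket met its SLA, yet most users rated the experience poorly.
tickets = [ResolvedTicket(True, 2), ResolvedTicket(True, 3), ResolvedTicket(True, 2),
           ResolvedTicket(True, 5), ResolvedTicket(True, None), ResolvedTicket(True, 4)]
print(sla_vs_xla(tickets))
# -> 100% SLA compliance, average experience 3.2, only 40% happy users: the watermelon warning fires.
```

The point of pairing the two views is exactly the one made above: the output metric alone would report success, while the experience data shows where improvement is needed.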
38 What will be next?
See you on May 14th