Slide 7

The essential difference between the two models is that under a managed services (outsourcing) model, the provider commits to delivering an "outcome" at a defined price, whereas under the staff augmentation model it commits only to an "input." An input is simply the performance of an activity, with no commitment that the activity will produce the desired outcome.

The managed services model drives value through planning, because the organization must define its requirements in terms of services and performance criteria. Pricing is tied to the outcome: should the service requirement diminish or disappear, the associated costs shrink in kind. This provides the "scalability to demand" often sought in a staff augmentation model, but scalability that is tied to the service.

Linked to managed services is a service commitment. Under the staff augmentation model, the only service commitment is hours of work. Under the managed services (outsourcing) model, the provider assumes all of the risk of meeting the service commitment. The value creation is substantial: because the provider assumes the delivery risk at a fixed cost, it is highly incentivized to establish the productivity measures required to meet the service commitment. This manifests itself in the implementation of tools and processes, as well as extensive documentation, because the provider cannot afford to risk missing the service commitment by relying on individuals. Documentation and process rigor also allow the service provider to move work through a global delivery structure with ease. Through the application of documentation, tools, and processes, the service provider can deliver services reliably with fewer, more productive resources. The managed services (outsourcing) model is therefore structured to deliver a commercially viable, low-cost service offering to the organization.
From the standpoint of what an organization really wants from IT, the managed services (outsourcing) model delivers the following advantages: a predictable, low price/cost service outcome; scalability based on business demand; fewer delivery risks; and operational performance metrics tied to process excellence, documentation, and outcomes.

Managed services organizations are generally large and serve multiple clients from multiple locations. Unlike smaller staff augmentation organizations (or individual contractors), managed services organizations can deliver a wealth of skills and capabilities, giving client organizations access to a broad base of skills, solutions, and knowledge to meet evolving requirements. A managed services (outsourcing) model delivers all of the skills access and flexibility of a staff augmentation model. And because the model relies heavily on management and process rigor, clients generally experience an elevated capability themselves.

Slide 8

A key question new standards authors and owners often ask is, "Who will be using my content?" When we think about an end-to-end (e2e) standards solution, we realize there are several different aspects and components, each important to particular roles performing specific functions. For that reason, GSAR users span a variety of roles, and its content applies to the entire scope of the account lifecycle, from sales and engagement, to transition and transformation, to steady state, for both new and incremental business. Let's look at a broad view of the ITD account lifecycle to see how GSAR fits into this picture. The lifecycle begins as sales teams present clients with high-level information about what IBM has to offer.
During this phase, the intellectual capital in GSAR helps answer questions such as "What services do we sell?" and "What are typical client wants and needs?" Later, during engagements, Technical Solution Managers (TSMs) and Technical Solution Architects (TSAs) develop solution designs and proposals describing the service(s) we can deliver, how we deliver them, and at what cost. The standards in GSAR provide a foundation for them to work from and answer questions such as "How is the solution designed?" and "What are the cost drivers?" Once a client has signed the contract with IBM, ITD competency leaders, architects, Delivery Project Executives (DPEs), and Subject Matter Experts (SMEs) work to transition IBM into assuming the IT responsibilities for the client. Finally, delivery and account teams take over operational responsibilities during the "steady state" period. GSAR content is used during both of these phases as well.

Slide 11

We will differentiate by applying assets through automation; through work practices and processes that reflect our point of view on the optimal way to deliver specific services; and by sourcing the right skills globally. We will discuss most of these points specifically in the following pages.

Slide 12

We utilize three key levers to drive quality and productivity ... The first, standardization, is all about industrializing service delivery. We embarked on the quality journey long ago and have made significant progress with our quality methods, focusing on process simplification and eliminating non-value-add steps. We have broadened our continuous quality rollout across all geographic locations ... The second lever is automation. Here we leverage IBM hardware, software, and Research assets extensively. A great example is our deployment of Maximo to implement standard best-practice workflows ...
leveraging this tool from SWG allows us to pool delivery resources and drive skill depth for quality and productivity gains ... Lastly, skills are critical in a delivery business ... and as over 50% of our delivery costs are labor, leveraging the right talent globally, at the best cost, is vital. Equally vital is continually looking at the skills we have, where we may have gaps, and the training/certifications needed to fill those gaps ... I'll also cover this in a little more detail ... So, let's look at some specifics on how we execute on these three levers ...

Slide 15

On this slide, you see the client account and vendor account team roles on the left, our delivery organization on the right -- including the Service Owner roles -- and the two parties coming together in the middle. The message here is how important our standardization strategy is in our outsourcing relationships. With this strategy, the vendor is able to clearly articulate our company's IT standards: in particular, what standard services we offer and how they are delivered. Clients have IT standards of their own, such as existing applications, roles, and processes. For an effective vendor integration into the client's existing environment, clear communication and knowledge sharing are essential. Therefore, the account team and the service provider (delivery) need to work from the same shared information. Four key assets typically used to facilitate this collaboration are: a Service Catalog, a Solution Repository, a Deployment Portfolio, and a Reference Architecture.

Slide 17

What are the key components of an SLA? The SLA should include components in two areas: services and management.
Service elements include: the specifics of the services provided (and what is excluded, if there is room for doubt); conditions of service availability; standards such as the time window for each level of service (prime time and non-prime time may have different service levels, for example); the responsibilities of each party; escalation procedures; and cost/service tradeoffs.

Management elements should include: definitions of measurement standards and methods; the reporting process, contents, and frequency; a dispute resolution process; an indemnification clause protecting the customer from third-party litigation resulting from service level breaches (although this should already be covered in the contract); and a mechanism for updating the agreement as required. This last item is critical: service requirements and vendor capabilities change, so there must be a way to ensure the SLA is kept up to date.

Slide 20

What should I consider when selecting metrics for my SLA?

Choose measurements that motivate the right behavior. The first goal of any metric is to motivate the appropriate behavior on the part of the client and the service provider. Each side of the relationship will attempt to optimize its actions to meet the performance objectives defined by the metrics. First, focus on the behavior you want to motivate. Then test your metrics by putting yourself in the place of the other side: how would you optimize your performance, and does that optimization support the originally desired results?

Ensure that metrics reflect factors within the service provider's control. To motivate the right behavior, SLA metrics have to reflect factors within the outsourcer's control. A typical mistake is to penalize the service provider for delays caused by the client's lack of performance. For example, if the client provides change specifications for application code several weeks late, it is unfair and demotivating to hold the service provider to a prespecified delivery date.
Making the SLA two-sided, by measuring the client's performance on mutually dependent actions, is a good way to keep the focus on the intended results.

Choose measurements that are easily collected. Balance the power of a desired metric against its ease of collection. Ideally, SLA metrics will be captured automatically, in the background, with minimal overhead, but this may not be possible for all desired metrics. When in doubt, compromise in favor of easy collection; no one is going to invest the effort to collect metrics manually.

Less is more. Despite the temptation to control as many factors as possible, avoid choosing an excessive number of metrics, or metrics that produce a voluminous amount of data that no one will have time to analyze.

Set a proper baseline. Defining the right metrics is only half of the battle. To be useful, the metrics must be set to reasonable, attainable performance levels. Unless strong historical measurement data is available, be prepared to revisit and readjust the settings at a future date through a predefined process specified in the SLA.

Slide 24

What kinds of metrics should be monitored? Many items can be monitored as part of an SLA, but the scheme should be kept as simple as possible to avoid confusion and excessive cost on either side. In choosing metrics, examine your operation and decide what is most important. The more complex the monitoring (and associated remedy) scheme, the less likely it is to be effective, since no one will have time to properly analyze the data. When in doubt, opt for ease of collecting metric data; automated systems are best, since costly manual collection of metrics is unlikely to be reliable. Depending on the service, the types of metrics to monitor may include:

Service availability: the amount of time the service is available for use.
This may be measured by time slot, with, for example, 99.5 percent availability required between the hours of 8 am and 6 pm, and more or less availability specified during other times. E-commerce operations typically have extremely aggressive SLAs at all times; 99.999 percent uptime is not an uncommon requirement for a site that generates millions of dollars an hour.

Defect rates: counts or percentages of errors in major deliverables. Production failures such as incomplete backups and restores, coding errors/rework, and missed deadlines may be included in this category.

Technical quality: in outsourced application development, the technical quality of code as measured by commercial analysis tools that examine factors such as program size and coding defects.

Security: in these hyper-regulated times, application and network security breaches can be costly. Measuring controllable security activities, such as anti-virus updates and patching, is key to proving that all reasonable preventive measures were taken in the event of an incident.
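The availability percentages above translate directly into concrete downtime budgets, which is often how the numbers are sanity-checked during SLA negotiation. The following is a minimal illustrative sketch of that arithmetic; the function names and window choices are my own, not part of any IBM tooling or the GSAR content.

```python
def allowed_downtime_minutes(availability_pct: float, period_minutes: float) -> float:
    """Downtime budget implied by an availability target over a given period."""
    return period_minutes * (1.0 - availability_pct / 100.0)

def measured_availability_pct(uptime_minutes: float, window_minutes: float) -> float:
    """Observed availability percentage for a reporting window."""
    return 100.0 * uptime_minutes / window_minutes

MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600 minutes (non-leap year)
PRIME_WINDOW = 10 * 60             # one 8 am-6 pm prime-time window, in minutes

# 99.5% availability during a single 10-hour prime-time window
# allows only 3 minutes of downtime in that window:
prime_budget = allowed_downtime_minutes(99.5, PRIME_WINDOW)        # 3.0

# "Five nines" (99.999%) over a full year allows roughly
# 5.26 minutes of total downtime:
yearly_budget = allowed_downtime_minutes(99.999, MINUTES_PER_YEAR) # ~5.256
```

The same helpers can be run in reverse during reporting: 597 minutes of uptime in a 600-minute prime window is exactly the 99.5 percent target, which shows how little slack such targets leave for manual measurement error.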