Available online at www.sciencedirect.com
The International Journal of Management Science
Omega 33 (2005) 283-306
www.elsevier.com/locate/dsw

Best practices in business process redesign: an overview and qualitative evaluation of successful redesign heuristics

H.A. Reijers a,*, S. Liman Mansar b

a Department of Information and Technology, Faculty of Technology and Management, Eindhoven University of Technology (Pav. D14), P.O. Box 513, 5600 MB Eindhoven, Netherlands
b Department of Computing, Communications Technology and Mathematics, London Metropolitan University, 2-16 Eden Grove, London N7 8EA, UK

Received 25 April 2002; accepted 23 April 2004

* Tel.: +31-40-247-2290; fax: +31-40-243-2612. E-mail address: jreijers@win.tue.nl (H.A. Reijers).

Abstract

To implement business process redesign, several best practices can be distinguished. This paper gives an overview of heuristic rules that can support practitioners in developing a business process design that is a radical improvement of a current design. The emphasis is on the mechanics of the process, rather than on behavioral or change management aspects. The various best practices are derived from a wide literature survey and supplemented with the experiences of the authors. To evaluate the impact of each best practice along the dimensions of cost, flexibility, time and quality, a conceptual framework is presented that synthesizes views from areas such as information systems development, enterprise modeling and workflow management. The best practices are thought to have a wide applicability across various industries and business processes. They can be used as a "check list" for process redesign under the umbrella of diverse management approaches such as Total Cycle Time compression, the Lean Enterprise and Constraints Management.
© 2004 Elsevier Ltd. All rights reserved.

Keywords: Business process redesign; Operations management; MIS; Heuristics

1. Introduction

A business process redesign (BPR) initiative is commonly seen as a twofold challenge (e.g. [1-3]):
• a technical challenge, which is due to the difficulty of developing a process design that is a radical improvement of the current design,
• and a socio-cultural challenge, resulting from the severe organizational effects on the involved people, which may lead them to react against those changes.
Apart from these challenges, the project management of a BPR initiative itself is also often named as a separate BPR challenge (e.g. [4]).

Many methodologies, techniques, and tools have been proposed that address one or more of the mentioned challenges in a more or less integrated approach (for an overview see [5]). Prescriptive literature in the field is sometimes advertised as "a step-by-step guide to business transformation" (e.g. [1]), suggesting a complete treatment of the organizational and technical issues involved in BPR. However, work like this seems to be primarily aimed at impressing a business audience. At best it gives some directions for managing organizational risk, but it commonly lacks actual technical direction for (re)designing a business process. Even the classic work of Hammer and Champy [6] devotes only 14 out of a total of over 250 pages to this issue, of which 11 pages are used for the description of a case. Gerrits [7] mentions: "In the literature on BPR, examples of successful BPR implementations are given.
Unfortunately, the literature restricts itself to descriptions of the 'situation before' and the 'situation after', giving very little information on the redesign process itself." According to Motwani et al. [8], research in BPR has in the meantime progressed slightly to also include the development of conceptual models for assessing and executing BPR. However, the main criticism of these models/steps is that there has been little effort to use the existing theory to develop a comprehensive integrated model of BPR. Valiris and Glykas [9] also recognize as a limitation of existing BPR methodologies that "there is a lack of a systematic approach that can lead a process redesigner through a series of steps for the achievement of process redesign". As Sharp and McDermott [10] commented more recently: "How to get from the as-is to the to-be [in a BPR project] isn't explained, so we conclude that during the break, the famous ATAMO procedure is invoked—And Then, A Miracle occurs".

In our research we are interested in developing a methodology for BPR implementation based not only on detailing steps for BPR but also on guiding and supporting the BPR execution by means of techniques and best practices. In this context our first concern is to adopt (or define) an existing framework for BPR. We will not try to present yet another integrated BPR methodology; the framework should only allow the user of the BPR methodology to recognize the important topics and their relationships. The second concern is to identify, from the literature and from the successful execution of current BPR implementations, the best practices that may or should be used for each topic of the framework. Brand and Van der Kolk's [11] evaluation framework will be used to assess the (supposed) effects of a best practice on cost, quality, time and flexibility. Our final concern is to guide users on when and in which order to apply these best practices. This latter point also includes guidance on the limits of these best practices and their validity domain. This involves an extensive study of all the best practices identified. In this paper, we will focus only on the first and second concerns of our research, namely:
• defining a framework for BPR implementation and
• identifying the best practices in BPR implementation.
The best practices that are identified should be seen as independent rules of thumb, each of which can be of value in supporting practitioners facing the technical challenge of a BPR project. Merely applying these rules, however, is unlikely to lead to sustained success. In the first place, the BPR practices we will discuss focus on the mechanics of the process and do not cover how the behavior of people working within the process can be influenced. Anybody who has conducted a BPR project realizes that the latter is a crucial factor in making a process transformation successful. Secondly, the application of these various best practices must be embedded within an overall vision on BPR that is adopted for the project. Several well-known management philosophies exist that can guide the overall course of a reengineering project, such as Total Cycle Time Compression [12,13], the Lean Enterprise approach [14] and Constraints Management [15,16].
Although a discussion of these various approaches is outside the scope of this paper, it is important to point out here that the best practices we discuss should be seen as being on a lower, more operational level than these encompassing approaches. Many of the best practices we mention have a wide application across these approaches. For example, consider the reengineering of a manufacturing company as described in [17]. This BPR project was driven by a Total Cycle Time Compression approach in which several best practices we list in this paper were applied, such as empowerment and the introduction of process-wide technology. Another example is the task elimination best practice, which originated from the same experiences within the Toyota company that shaped "lean thinking" as an overall management philosophy [18]. In summary, we believe that adopting an overall management vision on BPR is a necessary condition for making the application of BPR best practices effective and for giving direction to a BPR effort. In return, the implementation of such a BPR vision can be helped by considering the best practices we present in the rest of this paper.

The structure of the paper is as follows. First we present a framework for BPR implementation in Section 2. It will serve as guidance on which topics should be considered when implementing BPR. Before we discuss the various best practices, we describe a model in Section 3 that serves as a frame of reference for their assessment. Next we describe the BPR best practices in Section 4. For each best practice, we present its general formulation, its potential effects and possible drawbacks. We also indicate similarities between best practices and provide references to their origin and, if available, to known quantitative or analytic support. A summary of all contributions to the best practices is given in Table 1. The paper ends with our conclusions and future research.

2. A business process redesign framework

In order to help the user choose the correct best practice when dealing with the implementation of BPR, it is important to define a framework for it clearly. The idea behind a framework is to help practitioners by identifying the topics that should be considered and how these topics are related [19]. In this perspective, the framework should clearly identify all views one should consider when carrying out a BPR implementation project. So, a framework is not a model of a business process. It is rather an explicit set of ideas that helps in thinking about the business process in the context of reengineering. We will now explore and discuss several frameworks and business process analysis models that are available in the literature.

Table 1. A survey of best practices in business process redesign (for each best practice the table records the framework element, the rule name, its impact on the business process, its limits, the authors referring to it, the technique used, tool availability and application examples).

Customers.
Control relocation: improves quality and cost; referred to by Klein [35] (guideline); example: Pacific Bell.
Contact reduction: improves time, quality and cost; referred to by Hammer and Champy [6] (guideline) and Buzacott [36] (queuing model, with conditions on when to reduce contact or not); example: Ford's accounts payable department reduced the number of clerks from 500 to 125 (from three points of contact to two).
Integration: improves time and cost, reduces flexibility; referred to by Klein [35] and Peppard and Rowland [32] (guidelines); examples: individual customers carry trays and clear away in fast-food restaurants; Baxter Healthcare integrated its organization with its customers through just-in-time provision of hospital equipment.

Products: none.

Operation.
Order types: improves time and cost, may reduce quality and flexibility; referred to by Hammer and Champy [6], Rupp and Russell [39], Peppard and Rowland [32] and Berg and Pottjewijd [38] (guidelines, including the notion of runners, repeaters and strangers to distinguish process variants); examples: IBM Credit, with three versions of the credit process (performed by computer, by a deal structurer, or with support of specialist advisers); a furniture factory distinguishing a separate supporting chair-making process.
Task elimination: improves time and cost, may reduce quality; referred to by Peppard and Rowland [32], Berg and Pottjewijd [38] and Van der Aalst and Van Hee [41] (guidelines); Buzacott [36] illustrates the quantitative effect of eliminating iterations on a simple queuing model; Castano et al. [40] provide an entity-based similarity coefficient (ARTEMIS methodology and tool) to evaluate the degree of similarity between activities; examples: transportation, movement and motion (a high-tech company found that its semiconductors traveled 150 000 miles during their transformation); controls through which all orders pass; physical transport of information.
Order-based work: improves time, possibly at extra cost; own experience (guideline: remove batch processing and periodic activities where possible).
Triage: improves quality, time and cost, may reduce flexibility; too much specialization may have inverted effects; referred to by Klein [35], Berg and Pottjewijd [38] (with an example of triage in times of peak demand) and Van der Aalst and Van Hee [41] (guidelines); Zapf and Heinzl [42] use simulation to test two triage configurations for call centre organizations; Dewan et al. [43] propose an approach for the integration of tasks based on an extension of PERT/CPM approaches and discuss its effect on cycle time and cost, applicable to administrative processes with relatively stable task structures (order fulfillment by mail-order distributors, mortgage processing, medical billing, configuration management in large-scale engineering design projects), with a model limited to fixed delays between tasks.
Task composition: improves time, quality and cost, may reduce flexibility; tasks that are too large may have inverted results; referred to by Hammer and Champy [6], Rupp and Russell [39], Peppard and Rowland [32], Berg and Pottjewijd [38], Reijers and Goverde [44], Van der Aalst [45] and Van der Aalst and Van Hee [41] (guidelines); Buzacott [36], Seidmann and Sundararajan [25] and Van der Aalst [45] give quantitative support (conditions based on ratios for when to combine two subsequent tasks, the dependence of the desirability of combining tasks on processing time and arrival variability, and the effect of task asymmetry on the optimality of the redesign); applicability: situations with a large number of tasks and a limited need for adapting information systems because of composition; example: an electronics company compressed responsibilities for the various steps of the order fulfillment process into one task executed by a so-called "customer service representative".

Behavioral view.
Resequencing: improves time and cost; referred to by Klein [35] (guideline).
Parallelism: improves time, may increase cost and reduce quality and flexibility; referred to by Rupp and Russell [39], Berg and Pottjewijd [38] and Van der Aalst and Van Hee [41] (guidelines); Buzacott [36] notes, using queuing models, that parallel processing is not necessarily superior unless the time individual jobs spend in the system is the dominant criterion; Van der Aalst [45] gives a set of conditions under which putting two subsequent tasks in parallel has a positive effect; example: in a stylized business process, the end controls are parallelized.
Knock-out: improves cost (throughput time may be longer than with fully parallel checking); referred to by Van der Aalst [45], who gives rules (a heuristic) on how to order tasks in knock-out processes.
Exception: improves time and quality, may reduce flexibility; referred to by Poyssick and Hannaford [46] and Hammer and Champy [6] (guidelines).

Organization: structure.
Order assignment: improves quality and reduces setup time, may reduce flexibility and increase queue time; referred to by Rupp and Russell [39], Hammer and Champy [6], Reijers and Goverde [44] and Van der Aalst and Van Hee [41] (guidelines); example: Bell Atlantic assigned a case team to establish high-speed digital circuits for business customers.
Flexible assignment: reduces queue time and improves quality, may reduce flexibility; referred to by Van der Aalst and Van Hee [41] (guideline).
Centralization: improves flexibility and time, at extra cost; referred to by Van der Aalst and Van Hee [41]; tool: workflow management systems.
Split responsibilities: improves quality and time, though more queuing may occur; referred to by Rupp and Russell [39] and Berg and Pottjewijd [38] (guidelines).
Customer teams: affects cost, time, flexibility and quality (similar to order assignment); referred to by Peppard and Rowland [32], Hammer and Champy [6] and Berg and Pottjewijd [38] (guidelines); example: Hallmark's integrated teams for the development of a new line of cards.
Numerical involvement: improves time and cost, may reduce quality; referred to by Hammer and Champy [6], Rupp and Russell [39] and Berg and Pottjewijd [38] (guideline: imagine what happens if only one person does the job and add additional resources only if that appears necessary); examples: Microsoft (10 000 employees) still works in teams of no more than 200 people despite information-flow problems; who is needed for the handling of an insurance claim?
Case manager: improves quality and customer satisfaction, at extra cost; referred to by Hammer and Champy [6] and Van der Aalst and Van Hee [41] (guidelines); Buzacott [36] provides conditions, based on queuing models, under which the role of the case manager is justified; example: Duke Power Company (a public utility), where case managers present customers with the useful fiction of an integrated customer service process.

Organization: population.
Extra resources: improves time and flexibility, at extra cost; increase capacity if possible, but not if it only moves the bottleneck; referred to by Berg and Pottjewijd [38]; Van Hee et al. [47] discuss, with algorithms, the optimality of several strategies for allocating additional resources in a business process; example: a telephone operator company.
Specialist-generalist: specialists improve time, generalists improve flexibility; referred to by Poyssick and Hannaford [46], Berg and Pottjewijd [38], Rupp and Russell [39] and Seidmann and Sundararajan [25], the latter giving guidance (queuing systems and tendency graphs) on the effect of knowledge intensity on the optimality of the redesign; example: IBM Credit, where specialist jobs such as credit checker and pricer were combined into a single "deal structurer" position.
Empower: improves time and cost, may reduce quality; referred to by Hammer and Champy [6], Rupp and Russell [39], Seidmann and Sundararajan [25] and Poyssick and Hannaford [46]; Buzacott [36] provides guidelines, based on formal models, on the efficiency of centralized versus decentralized systems; example: Taco Bell eliminated some supervisory layers to give more responsibility to restaurant managers, leading to the new job category of market manager.

Information.
Control addition: improves quality, at extra time and cost; referred to by Poyssick and Hannaford [46], Hammer and Champy [6] and Buzacott [36], the latter giving rules, based on a queuing model, on where it is best to check.
Buffering: improves time, at extra cost; own experience (guideline).

Technology.
Task automation: improves time, quality and cost, may reduce flexibility; referred to by Peppard and Rowland [32], Hammer and Champy [6] and Berg and Pottjewijd [38] (guidelines and rules of thumb for greater success in automation); examples: Nissan's rule of thumb of not automating dirty, difficult or dangerous tasks; Taco Bell's taco-making machine; Loews Corporation (a chain of movie theatres) introduced Telefilm and Teleticket services.
Integral business process technology: affects quality, cost and time; referred to by Klein [35], Hammer and Champy [6] (a chapter with examples on the enabling role of IT), Peppard and Rowland [32], Berg and Pottjewijd [38] and Van der Aalst and Van Hee [41]; tools: e.g. workflow packages; examples: shared databases, expert systems and telecommunications networks; computerization of documents.

Fig. 1. The WCA framework of Alter [19] (linking Customers, Products, Business Process, Participants, Information and Technology).

CIMOSA, a business process-centered method for enterprise modeling, distinguishes three modeling levels [20]:
• the requirements definition level: to represent the voice of the users, i.e. what is needed, expressed in a detailed and unambiguous way in user-oriented language;
• the design specification level: to formally define one or more solutions satisfying the set of requirements, to analyze their properties and to select the "best" one;
• the implementation description level: to state in detail the implementation solution, taking into account technical and physical constraints.
It is clear, given this classification and the nature of BPR, that the business process framework we need is at the design specification level. Alter [19] suggests the use of the so-called work-centered analysis framework (WCA).
It consists of six linked elements, the internal or external customers of the business process, the products (or services) generated by the business process, the steps in the business process, the participants in the business process, the information the business process uses or creates and finally the technology the business process uses. Fig. 1 shows the links between these elements. This framework appears to be relevant for our purpose because it dissociates the structure of the business process from the other "components" of a business process: the participants, the information and the technology. Indeed, as stated by Grant [21] it is a narrow view to only consider processes when depicting BPR; other important aspects of institutions are also organizational structure, people, communication and technology. The danger of adopting too narrow a view is that it misdirects developers to focus exclusively on processes while ignoring a variety of other possible reengi-neering opportunities that may result from a wider view. A second argument for the relevance of such a framework for our purpose is the emphasis on technology as a separate part of the business process. In their paper, Gunasekaran and Nath [22] describe the advantages of integrating IT in BPR to improve the performance of manufacturing/service H.A. Reijers, S. Liman Mansar I Omega 33 (2005) 283-306 293 companies. They also list suggestions on how technology could be used to reengineer the business process. Anyway, the fundamental idea here is that it is advised to keep in mind what kind of IT is available and in which way it could help improve the process. Finally, Alter's framework is consistent with the CIMOSA standard enterprise modeling views: CIMOSA recommends to consider a function view that addresses the enterprise functionality (i.e. what has to be done) and the enterprise behavior (i.e. in which order work has to be done), an information view (i.e. what are the objects to be processed to be used), a resource view (i.e. who /what does what) and an organization view (i.e. organization entities and their relationships, who is responsible of what or whom, who has authority on what, people empowerment, etc.). Compared to Alter's framework, it is clear that the difference with CIMOSA views is in the "Technology" dimension, as it is not mentioned in CIMOSA. Another framework has been presented by Jablonski and Bussler [23] in the context of workflow management. Van der Aalst and Berens [24] see a workflow as a specific type of business process: it delivers services or informational products. Jablonski and Bussler provide the MOBILE model for workflows, which is split into two categories of perspectives: the factual perspectives and the systemic perspectives. The former determine the contents of a workflow model and the latter the enactment of workflow descriptions. We are obviously interested in the factual perspectives of the MOBILE workflow model. Essentially five perspectives are described: • the function perspective: what has to be executed?, • the operation perspective: how is a workflow operation implemented?, • the behavior perspective: when is a workflow executed?, • the information perspective: what data are consumed and produced?. • the organization perspective: who has to execute a workflow or a workflow application? The operation and the behavior perspectives can be considered as a more detailed view of the business process as it is defined in Alter's WCA framework. 
Moreover, the authors distinguish in the organization perspective (comparable to "participants") two parts, the organization structure (elements: roles, users, groups, departments, etc.) and the organization population (individuals: agents which can have tasks assigned for execution and relationships between them), which clarifies the participants dimension. Seidmann and Sundarajan [25] have worked on the effects of some best practices on workflow redesign. In this context they have developed a process description based on four classes of parameters: • work system details, including the sequencing of tasks, the task consolidation and the scheduling of jobs. EXTERNAL ENVIRONMENT Customers Products 1 Business process Operation view Behavioural view Organisation -Structure -Population Information 5* Technology Fig. 2. Final framework for BPR. • job details, including the number of tasks in a job, the relative size of tasks, the nature of tasks and the degree of customization. • administrative variables, including the decision rights, the performance measures and the compensation schemes. • information and technology variables, including the knowledge intensity, the information symmetry and the information sharing. The first two classes of parameters are sensibly close to the operation and behavior perspectives described by Jablonski and Bussler [23]. The third class is related to human resources management and the last class is related to the technology dimension as mentioned in the WCA framework of Alter [19]. Seidmann and Sundarajan [25] do not add any new view to the business process redesign framework. However, they use and describe detailed parameters that are worth to be considered in a BPR effort. So finally, in the context of BPR, the extended framework of Fig. 2 is derived as a synthesis of the WCA framework [19], the MOBILE workflow model [23], the CIMOSA enterprise modeling views [20] and the process description classes of Seidmann and Sundarajan [25]. In this framework, six elements are linked: • the internal or external customers of the business process, • the products (or services) generated by the business process, • the business process with two views, (a) the operation view: how is a workflow operation implemented? (number of tasks in a job, relative size of tasks, nature of tasks, degree of customization), and 294 H.A. Reijers, S. Liman Mansar I Omega 33 (2005) 283-306 (b) the behavior view, when is a workflow executed? (sequencing of tasks, task consolidation, scheduling of jobs, etc.), • the participants in the business process considering: (a) the organization structure (elements: roles, users, groups, departments, etc.) and (b) the organization population (individuals: agents which can have tasks assigned for execution and relationships between them), • the information the business process uses or creates, • the technology the business process uses and finally, • the external environment other than the customers. This framework will be used to classify the best practices for BPR that we will identify in Section four. But prior to that we present in the following an evaluation framework that helps assessing the effects of the best practices on the redesigned business process. 3. Evaluation framework Brand and Van der Kolk [11] distinguish four main dimensions in the effects of redesign measures: time, cost, quality and flexibility. 
Ideally, a redesign of a business process decreases the time required to handle an order, it decreases the required cost of executing the business process, it improves the quality of the service delivered and it improves the ability of the business process to react to variation. The attractive property of their model is that, in general, improving upon one dimension may have a weakening effect on another. For example, reconciliation tasks may be added in a business process to improve on the quality of the delivered service, but this may have a drawback on the timeliness of the service delivery. To signify the difficult trade-offs that sometimes have to be made they refer to their model as the devil's quadrangle. It is depicted in Fig. 3. Awareness of the trade-off that underlies a redesign measure is very important in the redesign of a business process. Sometimes, the effect of a redesign measure may be that the result from some point of view is worse than the existing business process. Also, the application of several best practices may result in the (partly) neutralization of the desired effects of each of the single measures. Each of the four dimensions of the devil's quadrangle may be made operational in different ways. For example, there are several types of cost and even so many directions to focus on when attempting to decrease cost. The translation of the general concepts time, cost, quality and flexibility to a more precise meaning is context sensitive. The key performance indicators of an organization or—more directly—the performance targets formulated for a redesign effort should ideally be formulated as much more precise applications of the four named dimensions. In our discussion of the effects of redesign measures we will not try to assess their effectiveness in every thinkable Quality Time Flixibility Fig. 3. The devil's quadrangle. aspect of each of the four dimensions. We will focus on some common and straightforward interpretations. 4. Best practices Over the last 20 years, best practices have been collected and applied in various areas, such as business planning, healthcare, manufacturing and the software development process (e.g. [26-29]). Although an ideal best practice prescribes the best way to treat a particular problem that can be replicated in any situation or setting, it is more fruitful to see it as something that "needs to be adapted in skilfull ways in response to prevailing conditions" [27]. In this section we describe such best practices, which can actually support the redesigner of a business process in facing the technical BPR challenge: the implementation of an improved process design. The presentation of these best practices especially aims at BPR efforts where an existing business process is taken as basis for its redesign. A best practice can then be applied locally to boost the overall performance. Taking the existing process as starting point contrasts sharply with the so-called clean-sheet approaches, i.e., where the process is designed from scratch. There is considerable discussion in literature on the choice between these alternatives (see, e.g. [30]), but taking the existing process as a starting point is in practice the most common way of developing a new business process, as observed e.g. by Aldowaisan and Gaafar [31]. The presented best practices in this paper are often derived from experience gained within large companies or by consultancy firms in BPR engagements. For example, the H.A. Reijers, S. 
Liman Mansar I Omega 33 (2005) 283-306 295 best practices as proposed by Peppard and Rowland [32] are derived from experiences of the Toyota company. It should be noted that many of the best practices lack an adequate (quantitative) support, as observed by, e.g. Van der Aalst [33]. Not every best practice that we encountered in our literature survey is incorporated in this overview. Some of them proved to be more on the strategic level, e.g. on the selection of products to be offered by a company, or were thought to be of very limited general application, e.g. they were specific for a certain industry. The presented best practices are universal in the sense that they are applicable within the context of any business process, regardless of the product or service delivered. Improving a process can concern any of the components of the framework we adopted in Section 2. Thus, we classify the best practices in a way that respects the framework we have adopted. We identify best practices that are oriented towards: • Customers, which focus on improving contacts with customers. • Business process operation, which focus on how to implement the workflow, • Business process behavior, which focus on when the workflow is executed, • Organization, which considers both the structure of the organization (mostly the allocation of resources) and the resources involved (types and number). • Information, which describes best practices related to the information the business process uses, creates, may use or may create. • Technology, which describes best practices related to the technology the business process uses or may use. • External environment, which try to improve upon the collaboration and communication with the third parties Note that this distinction is not mutually exclusive. Therefore, some best practices could have been assigned to more than one of these classes. From this classification it is clear that product-oriented best practices are not taken into account. This is related to the fact that a redesign focuses on already existing business processes and not on the product to be processed. We believe that the early design of the process is strongly connected to the product, see our earlier paper [34]. Essentially, the paper describes a formal method for deriving a workflow considering the structure of the product. The method is applied in the context of process design based on a clean-sheet approach, where the prior process is not taken into account. In the case of a redesign, the derived workflow can be considered as a first rough process to which the following best practices can be further applied, thus allowing to take into consideration the lessons learnt form the past in the organization. 4.1. Customer 4.1.1. Control relocation: 'move controls towards the customer' Different checks and reconciliation operations that are part of a business process may be moved towards the customer. Klein [35] gives the example of Pacific Bell that moved its billing controls towards its customers eliminating in this way the bulk of its billing errors. It also improved customer's satisfaction. A disadvantage of moving a control towards a customer is higher probability of fraud, resulting in less yield. This best practice is named by Klein [35]. 4.1.2. Contact reduction: 'reduce the number of contacts with customers and third parties' The exchange of information with a customer or third party is always time-consuming. 
Especially when information exchanges take place by regular mail, substantial wait times may be involved. Also, each contact introduces the possibility of an error. Hammer and Champy [6] describe a case where the multitude of bills, invoices and receipts creates a heavy reconciliation burden. Reducing the number of contacts may therefore decrease throughput time and boost quality. Note that it is not always necessary to skip certain information exchanges, but that it is possible to combine them with limited extra cost. A disadvantage of a smaller number of contacts might be the loss of essential information, which is a quality issue. Combining contacts may result in the delivery or receipt of too much data, which involves cost. This best practice is mentioned by Hammer and Champy [6]. Buzacott [36] has investigated this best practice quantitatively.

4.1.3. Integration: 'consider the integration with a business process of the customer or a supplier'
This best practice can be seen as exploiting the supply-chain concept known in production [37]. The actual application of this best practice may take on different forms. For example, when two parties have to agree upon a product they jointly produce, it may be more efficient to perform several intermediate reviews than one large review after both parties have completed their part. In general, integrated business processes should render a more efficient execution, both from a time and a cost perspective. The drawback of integration is that mutual dependence grows and, therefore, flexibility may decrease. Both Klein [35] and Peppard and Rowland [32] mention this best practice (Fig. 4).

4.1.4. Evaluation
Using the evaluation framework introduced in Section 3, a summary of the general effects of the three customer best practices can be seen in Fig. 5.

Fig. 4. Integration.
Fig. 5. Evaluation of customer best practices.

The gray square represents a neutral effect on all four distinguished dimensions. The effects of a best practice are represented by the other polygons. A positive (negative) effect of a best practice on a specific dimension is signified by its corner extending beyond (staying within) the neutral square. For example, the integration best practice has positive effects on the cost and time dimensions (i.e. it reduces cost and time), a negative effect on flexibility (i.e. it reduces flexibility) and a neutral effect on quality. All depicted effects are scored on a relative scale.

4.2. Business process operation

4.2.1. Order types: 'determine whether tasks are related to the same type of order and, if necessary, distinguish new business processes'
Especially Berg and Pottjewijd [38] convincingly warn about parts of business processes that are not specific to the business process they are part of. Ignoring this phenomenon may result in a less effective management of this 'subflow' and a lower efficiency. Applying this best practice may yield faster processing times and less cost. Also, distinguishing common subflows of many different flows may yield efficiency gains. Yet, it may also result in more coordination problems between the business processes (quality) and fewer possibilities for rearranging the business process as a whole (flexibility).

Fig. 6. Task elimination.
This best practice has been mentioned in one form or another by Hammer and Champy [6], Rupp and Russell [39], Peppard and Rowland [32] and Berg and Pottjewijd [38]. 4.2.2. Task elimination: 'eliminate unnecessary tasks from a business process' A common way of regarding a task as unnecessary is when it adds no value from a customer's point of view. Typically, control tasks in a business process do not do this; they are incorporated in the model to fix problems created (or not elevated) in earlier steps. Control tasks are often identified by iterations. Tasks redundancy can also be considered as a specific case of task elimination (Fig. 6). In order to identify redundant tasks, Castano et al. [40] have developed entity-based similarity coefficients. They help automatically checking the degree of similarities between tasks (or activities). The aims of this best practice are to increase the speed of processing and to reduce the cost of handling an order. An important drawback may be that the quality of the service deteriorates. This best practice is widespread in literature, for example, see Peppard and Rowland [32] Berg and Pottjewijd [38] and Van der Aalst and Van Hee [41]. Buzacott [36] illustrates the quantitative effects of eliminating iterations with a simple model. 4.2.3. Order-based work: 'consider removing batch-processing and periodic activities from a business process' Some notable examples of disturbances in handling a single order are: (a) its piling up in a batch and (b) periodic activities, e.g. because processing depends on a computer system that is only available at specific times. Getting rid of these constraints may significantly speed up the handling of individual orders. On the other hand, efficiencies of scale can be reached by batch processing. Also, the cost of making information systems permanently available may be costly. H.A. Reijers, S. Liman Mansar I Omega 33 (2005) 283-306 297 Fig. 8. Task composition. Fig. 7. Triage. This best practice results from our own reengineering experience. 4.2.4. Triage: 'consider the division of a general task into two or more alternative tasks' or 'consider the integration of two or more alternative tasks into one general task' When applying this best practice in its first and most popular form, it is possible to design tasks that are better aligned with the capabilities of resources and the characteristics of the orders being processed (Fig. 7). Both interpretations improve upon the quality of the business process. Distinguishing alternative tasks also facilitates a better utilization of resources, with obvious cost and time advantages. On the other hand, too much specialization can make processes become less flexible, less efficient, and cause monotonous work with repercussions for quality. An alternative form of the triage best practice is to divide a task into similar instead of alternative tasks for different subcategories of the orders being processed. For example, a special cash desk may be set up for customers with an expected low processing time. Note that this best practice is in some sense similar to the order types best practice we mentioned in this section. The main interpretation of the triage concept can be seen as a translation of the order type best practice on a task level. The triage concept is mentioned by Klein [35], Berg and Pottjewijd [38] and Van der Aalst and Van Hee [41]. Zapf and Heinzl [42] show the positive effects of triage within the setting of a call center. Dewan et al. 
[43] study the impact of the triage on the organization in terms of cycle-time reduction. 4.2.5. Task composition: 'combine small tasks into composite tasks and divide large tasks into workable smaller tasks' Combining tasks should result in the reduction of setup times, i.e., the time that is spent by a resource to become familiar with the specifics of a order. By executing a large task which used to consist of several smaller ones, some positive effect may also be expected on the quality of the delivered work. On the other hand, making tasks too large may result in (a) smaller run-time flexibility and (b) lower quality as tasks become unworkable. Both effects are exactly countered by dividing tasks into smaller ones. Obviously, smaller tasks may also result in longer setup times (Fig. 8). Quality --Case types ------ Task elimination - Case-based work Flexibility Fig. 9. Evaluation of business process operation best practices (I). This best practice is related to the triage best practice in the sense that they both are concerned with the division and combination of tasks. It is probably the most cited best practice, mentioned by Hammer and Champy [6], Rupp and Russell [39], Peppard and Rowland [32], Berg and Pottjewijd [38], Seidmann and Sundararajan [25], Reijers and Goverde [44], Van der Aalst [45] and Van der Aalst and Van Hee [41]. Some of these authors only consider one part of this best practice, e.g. combining smaller tasks into one. Buzacott [36], Seidmann and Sundararajan [25] and Van der Aalst [45] provide quantitative support for the optimality of this best practice for simple models. 4.2.6. Evaluation The assessment of the best practices that aim at the business process operation is summarized in Figs. 9 and 10. The meaning of the shapes in these figures is similar to that of the shapes in Fig. 5. 4.3. Business process behavior 4.3.1. Resequencing: 'move tasks to more appropriate places' In existing business processes, actual tasks orderings do not reveal the necessary dependencies between tasks 298 H.A. Reijers, S. Liman Mansarl Omega 33 (2005) 283-306 Triage Task composition (larger tasks) Flexibility Fig. 10. Evaluation of business process operation best practices (II). 3 1 2 Fig. 11. Resequencing. (Fig. 11). Sometimes it is better to postpone a task if it is not required for immediately following tasks, so that perhaps its execution may prove to become superfluous. This saves cost. Also, a task may be moved into the proximity of a similar task, in this way diminishing setup times. The resequencing best practice is mentioned as such by Klein [35]. It is also known as 'process order optimization'. 4.3.2. Knock-out: 'order knock-outs in an increasing order of effort and in a decreasing order of termination probability' A typical part of a business process is the checking of various conditions that must be satisfied to deliver a positive end result. Any condition that is not met may lead to a termination of that part of the business process: the knock-out (Fig. 12). If there is freedom in choosing the order in which the various conditions are checked, the condition that has the most favorable ratio of expected knock-out probability versus the expected effort to check the condition should be pursued. Next, the second best condition, etc. This way of ordering checks yields on average the least costly business process execution. There is no obvious drawback on this best practice, although it may not always be possible to freely order these kinds of checks. 
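To make this ordering rule concrete, the following sketch is our own illustration and is not taken from the cited sources; the check names and figures are hypothetical, and knock-out probabilities are assumed to be independent of the order in which checks are performed. It sorts checks by the ratio of knock-out probability to checking effort and computes the expected checking cost of a sequential order:

```python
from dataclasses import dataclass

@dataclass
class Check:
    name: str
    knockout_probability: float  # chance that this check terminates (rejects) the case
    effort: float                # expected cost of performing the check once

def expected_cost(order):
    """Expected total checking cost when the checks are executed in the given order."""
    cost, still_running = 0.0, 1.0
    for check in order:
        cost += still_running * check.effort          # check is only reached if not yet knocked out
        still_running *= 1.0 - check.knockout_probability
    return cost

def knockout_order(checks):
    """Order checks by decreasing knock-out probability per unit of checking effort."""
    return sorted(checks, key=lambda c: c.knockout_probability / c.effort, reverse=True)

# Hypothetical checks in an intake process (figures are illustrative only).
checks = [
    Check("completeness", 0.05, 2.0),
    Check("credit rating", 0.30, 5.0),
    Check("fraud screening", 0.10, 1.0),
]
best = knockout_order(checks)
print([c.name for c in best], round(expected_cost(best), 2))   # reordered: 6.76
print(round(expected_cost(checks), 2))                          # original order: 7.61
```

Under these assumptions, placing cheap checks with a high rejection probability first lowers the expected effort spent on cases that would be knocked out anyway.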
Also, implementing this best practice may result in a (part of a) business process that Fig. 13. Parallelism. takes a longer throughput time than a full parallel checking of all conditions. The knock-out best practice is a specific form of the resequencing best practice. Van der Aalst [45] mentions this best practice and also gives quantitative support for its optimality. 4.3.3. Parallelism: 'consider whether tasks may be executed in parallel' The obvious effect of putting tasks in parallel is that the throughput time may be considerably reduced (Fig. 13). The applicability of this best practice in business process redesign is large. In practical experiences we have had with analyzing existing business process, tasks were mostly ordered sequentially without the existence of hard logical restrictions prescribing such an order. A drawback of introducing more parallelism in a business process that incorporates possibilities of knock-outs is that the cost of business process execution may increase. Also, the management of business processes with concurrent behavior can become more complex, which may introduce errors (quality) or restrict run-time adaptations (flexibility). The parallelism best practice is a specific form of the resequencing best practice we mentioned at the start of this section. It is mentioned by Rupp and Russell [39], Buzacott [36], Berg and Pottjewijd [38] and Van der Aalst and Van Hee [41]. Van der Aalst [45] provides quantitative support for this best practice. 4.3.4. Exception: 'design business processes for typical orders and isolate exceptional orders from normal flow' Exceptions may seriously disturb normal operations. An exception, will require workers to get acquainted with the specifics of the exception, although they may not be able to handle it. Setup times are then wasted. Isolating exceptions, for example by a triage, will make the handling of normal orders more efficient. Isolating exceptions may possibly increase the overall performance as specific expertise can be H.A. Reijers, S. Liman Mansar I Omega 33 (2005) 283-306 299 Cost Quality -------- Time --Resequencing ------ Parellellism - Knock-out ■ - - Exception Flexibility Fig. 14. Evaluation of business process behavior best practices. ist 1 2 3 Fig. 15. Order assignment. build up by workers working on the exceptions. The price paid is that the business process will become more complex, possibly decreasing its flexibility. Also, if no special knowledge is developed to handle the exceptions (which is costly) no major improvements are likely to occur. This best practice is mentioned by Poyssick and Han-naford [46] and Hammer and Champy [6]. 4.3.5. Evaluation The assessment of the best practices that target the behavior of the business process can be seen in Fig. 14. The meaning of the shapes in these figures is similar to that of the shapes in Fig. 5. 4.4. Organization 4.4.1. Structure 4.4.1.1. Order assignment: 'let workers perform as many steps as possible for single orders' (see Fig. 15). By using order assignment in the most extreme form, for each task execution the resource is selected from the ones capable of performing it that has worked on the order before—if any. The obvious advantage of this best practice is that this person will get acquainted with the case and will need less setup time. An additional benefit may be that the quality of service is increased. On the negative side, the flexibility of resource allocation is seriously reduced. 
The execution of an order may experience substantial queue time when the person to whom it is assigned is not available. The order assignment best practice is described by Rupp and Russell [39], Hammer and Champy [6], Reijers and Goverde [44] and Van der Aalst and Van Hee [41]. 4.4.1.2. Flexible assignment: 'assign resources in such a way that maximal flexibility is preserved for the near future'. For example, if a task can be executed by either of two available resources, assign it to the most specialized resource. In this way, the possibilities to have the free, more general resource execute another task are maximal. The advantage of this best practice is that the overall queue time is reduced: it is less probable that the execution of an order has to await the availability of a specific resource. Another advantage is that the workers with the highest specialization can be expected to take on most of the work, which may result in a higher quality. The disadvantages of this best practice can be diverse. For example, work load may become unbalanced resulting in less job satisfaction. Also, possibilities for specialists to evolve into generalists are reduced. This best practice is mentioned by Van der Aalst and Van Hee [41]. 4.4.1.3. Centralization: 'treat geographically dispersed resources as if they are centralized'. This best practice is explicitly aimed at exploiting the benefits of a Workflow Management System or WfMS for short [23]. After all, when a WfMS takes care of assigning work to resources it has become less relevant where these resources are located geographically. In this sense, this best practice is a special form of the integral technology best practice (see Section 4.6). The specific advantage of this measure is that resources can be committed more flexibly, which gives a better utilization and possibly a better throughput time. The disadvantages are similar to that of the integral technology best practice. This best practice is mentioned by Van der Aalst and Van Hee [41]. 4.4.1.4. Split responsibilities: 'avoid assignment of task responsibilities to people from different functional units' (see Fig. 16). The idea behind this best practice is that tasks for which different departments share responsibility are more likely to be a source of neglect and conflict. Reducing the overlap in responsibilities should lead to a better quality of task execution. Also, a higher responsiveness to available work may be developed so that customers are served quicker. On the other hand, reducing the effective number of resources that is available for a work item may have a negative effect on its throughput time, as more queuing may occur. This specific best practice is mentioned by Rupp and Russell [39] and Berg and Pottjewijd [38]. 300 H.A. Reijers, S. Liman Mansar I Omega 33 (2005) 283-306 M I M I M 1 2 3 Fig. 16. Split responsibilities. 1 2 3 Fig. 17. Numerical involvement. 4.4.1.5. Customer teams: 'consider assigning teams out of different departmental workers that will take care of the complete handling of specific sorts of orders'. This best practice is a variation of the order assignment best practice. Depending on its exact desired form, the customer team best practice may be implemented by the order assignment best practice. Also, a customer team may involve more workers with the same qualifications, in this way relaxing the strict requirements of the order assignment best practice. Advantages and disadvantages are similar to those of the order assignment best practices. 
In addition, work as a team may improve the attractiveness of the work and a better understanding, which are both quality aspects. This best practice is mentioned by Peppard and Rowland [32], Hammer and Champy [6] and Berg and Pottjewijd [38]. 4.4.1.6. Numerical involvement: 'minimize the number of departments, groups and persons involved in a business process' (see Fig. 17). Applying this best practice should lead to less coordination problems. Less time spent of coordination makes more time available for the processing of orders. Reducing the number of departments may lead to less split responsibilities, with similar pros and cons as the split responsibilities best practice. In addition, smaller numbers of specialized units may prohibit the build of expertise (a quality issue) and routine (a cost issue). This best practice is described by Hammer and Champy [6], Rupp and Russell [39] and Berg and Pottjewijd [38]. 4.4.1.7. Case manager: 'appoint one person as responsible for the handling of each type of order, the case manager'. The case manager is responsible for a specific order or customer, but he or she is not necessarily the (only) resource that will work on it. The difference with the order assignment practice is that the emphasis is on management of the process and not on its execution. The most important aim of the best practice is to improve upon the external quality of a business process. The business process will become more transparent from the viewpoint of a customer as the case manager provides a single point of contact. This positively affects customer satisfaction. It may also have a positive effect on the internal quality of the business process, as someone is accountable for correcting mistakes. Obviously, the assignment of a case manager has financial consequences as capacity must be devoted to this job. This best practice is mentioned by Hammer and Champy [6] and Van der Aalst and Van Hee [41]. Buzacott [36] has provided some quantitative support for a specific interpretation of this best practice. 4.4.1.8. Evaluation. The assessment of the best practices for the structure of the organization is depicted in Figs. 18 and 19. 4.4.2. Population Extra resources: 'if capacity is not sufficient, consider increasing the number of resources' (see Fig. 20). This straightforward best practice speaks for itself. The obvious effect of extra resources is that there is more capacity for handling orders, in this way reducing queue time. It may also help to implement a more flexible assignment policy. Of course, hiring or buying extra resources has its cost. Note the contrast of this best practice with the numerical involvement best practice. This best practice is mentioned by Berg and Pottjewijd [38]. Van Hee et al. [47] discuss the optimality of several strategies to optimally allocate additional resources in a business process. 4.4.2.1. Specialist-generalist: 'consider to make resources more specialized or more generalist' (see Fig. 21). Resources may be turned from specialists into generalists or the other way round. A specialist resource can be trained for other qualifications; a generalist may be assigned to the same type of work for a longer period of time, so that his other qualifications become obsolete. When the redesign of a new business process is considered, application of this best practice comes down to considering the specialist-generalist ratio of new hires. A specialist builds up routine more quickly and may have a more profound knowledge than a generalist. 
As a result he or she works quicker and delivers higher quality. On the other hand, the availability of generalists adds more flexibility to the business process and can lead to a better utilization of resources. Depending on the degree of specialization or generalization, either type of resource may be more costly. Note that this best practice differs from the triage concept in the sense that the focus is not on the division of tasks. Poyssick and Hannaford [46] and Berg and Pottjewijd [38] stress the advantages of generalists. Rupp and Russell [39] and Seidmann and Sundararajan [25] mention both specialists and generalists.

Fig. 18. Evaluation of organization structure best practices (I).
Fig. 19. Evaluation of organization structure best practices (II).
Fig. 20. Extra resources.
Fig. 21. Specialist-generalist.
Fig. 22. Empower.

4.4.2.2. Empower: 'give workers most of the decision-making authority and reduce middle management'. In traditional business processes, substantial time may be spent on authorizing work that has been done by others. When workers are empowered to take decisions independently, it may result in smoother operations with lower throughput times. The reduction of middle management from the business process also reduces the labor cost spent on the processing of orders. A drawback may be that the quality of the decisions is lower and that obvious errors are no longer found. If bad decisions or errors result in rework, the cost of handling an order may actually increase compared to the original situation (Fig. 22). This best practice is named by Hammer and Champy [6], Rupp and Russell [39], Seidmann and Sundararajan [25] and Poyssick and Hannaford [46]. Buzacott [36] shows with a simple quantitative model that this best practice may indeed increase performance.

4.4.2.3. Evaluation. The assessment of the best practices for the organization population is given in Fig. 23. Note that only one of the two interpretations of the specialist-generalist best practice is shown in Fig. 23.

Fig. 23. Evaluation of organization population best practices.

4.5. Information

4.5.1. Control addition: 'check the completeness and correctness of incoming materials and check the output before it is sent to customers'
This best practice promotes the addition of controls to a business process. It may lead to a higher quality of the business process execution and, as a result, to less required rework (Fig. 24). Obviously, an additional control will require time and will absorb resources. Note the contrast of the intent of this best practice with that of the task elimination best practice, which is a business process operation best practice (see Section 4.2). This best practice is mentioned by Poyssick and Hannaford [46], Hammer and Champy [6] and Buzacott [36].

Fig. 24. Control addition.
Fig. 25. Buffering.
Fig. 26. Evaluation of information best practices.

4.5.2.
4.5.2. Buffering: 'instead of requesting information from an external source, buffer it by subscribing to updates'. Obtaining information from other parties is a major time-consuming part of many business processes (Fig. 25). By having information directly available when it is required, throughput times may be substantially reduced. This best practice can be compared to the caching principle that microprocessors apply. Of course, the subscription fee for information updates may be rather costly. This is especially so for information sources that contain far more information than is ever used. Substantial cost may also be involved in storing all the information. Note that this best practice is a weak form of the integration best practice (see Section 4.1): instead of direct access to the original source of information, which integration with a third party may come down to, a copy is maintained. This best practice follows from our own reengineering experience.

Fig. 25. Buffering.

4.5.2.1. Evaluation. A summary of the effects of the information best practices is given in Fig. 26.

Fig. 26. Evaluation of information best practices: control addition and buffering.

4.6. Technology

4.6.1. Task automation: 'consider automating tasks'. A positive result of automating tasks may be that they can be executed faster, at lower cost, and with a better result. An obvious disadvantage is that the development of a system that performs a task may be very costly. Generally speaking, a system performing a task is also less flexible in handling variations than a human resource. Instead of fully automating a task, automated support of the resource executing the task may also be considered. A significant application of the task automation best practice is the business process perspective of e-commerce: as cited by Gunasekaran et al. [48] and defined by Kalakota and Whinston [49], e-commerce can be seen as the application of technology towards the automation of business transactions and workflows. This best practice is specifically mentioned as a redesign measure by Peppard and Rowland [32], Hammer and Champy [6] and Berg and Pottjewijd [38].

4.6.2. Integral technology: 'try to elevate physical constraints in a business process by applying new technology'. In general, new technology can offer all kinds of positive effects. For example, the application of a WfMS may result in less time being spent on logistical tasks. A Document Management System will open up the information available on orders to all participants, which may result in a better quality of service. New technology can also change the traditional way of doing business by giving participants completely new possibilities. The purchase, development, implementation, training and maintenance efforts related to technology are obviously costly. In addition, new technology may arouse fear among workers or may result in other subjective effects; this may decrease the quality of the business process. This best practice is mentioned by Klein [35], Peppard and Rowland [32], Berg and Pottjewijd [38] and Van der Aalst and Van Hee [41].

4.6.3. Evaluation. The discussed effects of both technology best practices can be seen in Fig. 27. Note that, to give an idea of the diverse effects of this best practice, the effects of a WfMS have been depicted as an example.

Fig. 27. Evaluation of technology best practices: task automation and integral technology (WfMS).
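To make the task automation best practice of Section 4.6.1 somewhat more tangible, the fragment below sketches the partial automation of an approval task: routine orders are decided automatically, while exceptional ones are still routed to a human resource. The decision rule and the threshold are hypothetical and invented for this illustration; they are not taken from any of the cases referenced in this paper.

# Illustrative only: partial automation of an approval task. The rule and the
# 1000-euro threshold are hypothetical.
def approve_order(amount, customer_in_good_standing):
    """Decide routine cases automatically; route exceptions to a human."""
    if not customer_in_good_standing:
        return "rejected automatically"
    if amount <= 1000:
        return "approved automatically"
    return "routed to a human approver"  # automated support rather than full automation

print(approve_order(250.0, True))    # approved automatically
print(approve_order(5000.0, True))   # routed to a human approver
print(approve_order(100.0, False))   # rejected automatically

Whether such a rule captures enough of the task to be worthwhile is the kind of cost, quality and flexibility trade-off that the qualitative evaluation in Fig. 27 tries to capture.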
4.7. External environment

4.7.1. Trusted party: 'instead of determining information oneself, use the results of a trusted party'. Some decisions or assessments made within a business process are not specific to the process they are part of. Other parties may have determined the same information in another context, which, if it were known, could replace the decision or assessment. An example is the creditworthiness of a customer that bank A wants to establish: if the customer can present a recent creditworthiness certificate from bank B, then bank A will accept it. Obviously, the trusted party best practice reduces cost and may even cut back throughput time. On the other hand, the quality of the business process becomes dependent upon the quality of some other party's work. Some coordination effort with trusted parties is also likely to be required, which diminishes flexibility. This best practice is different from the buffering best practice (see Section 4.5), because the business process owner is not the one obtaining the information. This best practice results from our own reengineering experience.

4.7.2. Outsourcing: 'consider outsourcing a business process in whole or in part'. Another party may be more efficient in performing the same work, so it might as well perform it for one's own business process. The obvious aim of outsourcing work is that it will generate less cost. A drawback may be that quality decreases. Outsourcing also requires more coordination effort and will make the business process more complex. Note that this best practice differs from the trusted party best practice: when outsourcing, a task is executed at run time by another party, whereas the trusted party best practice allows for the use of a result obtained in the (recent) past. This best practice is mentioned by Klein [35], Hammer and Champy [6] and Poyssick and Hannaford [46].

4.7.3. Interfacing: 'consider a standardized interface with customers and partners'. The idea behind this best practice is that a standardized interface will diminish the probability of mistakes, incomplete applications, unintelligible communications, etc. (Fig. 28). A standardized interface may result in fewer errors (quality), faster processing (time) and less rework (cost). The interfacing best practice can be seen as a specific interpretation of the integration best practice, although it is not specifically aimed at customers. This best practice is mentioned by Hammer and Champy [6] and Poyssick and Hannaford [46].

Fig. 28. Interfacing.

4.7.4. Evaluation. The discussed effects of the external party best practices can be seen in Fig. 29.

Fig. 29. Evaluation of external party best practices: trusted party, outsourcing and interfacing.

5. Conclusion

In this paper we have dealt with best practices in implementing BPR. A framework for classifying the best practices was given; it is the result of the experience of other authors combined with our own, and it helps to structure the implementation of a BPR initiative. Within this framework we have described 29 best practices and qualitatively evaluated their impact on the cost, quality, flexibility and time criteria as defined by Brand and Van der Kolk [11]. In Table 1 we summarize, within our framework for BPR, the best practices' impact and the authors' contributions to their application. Examples from real cases are also given where available.
The table shows that the best practices have been applied in several types of industries, ranging from the accounts department at Ford to fast food, health sector organizations (Baxter Healthcare), manufacturing processes (Toyota, a furniture factory, a semiconductor manufacturer), IBM credit insurance processes and various other administrative processes. Despite this wide range of applications, and our personal experience in applying these rules in service contexts (i.e. invoice processing and purchasing), we believe there is still a need to further investigate the impact and relevance of each best practice for specific industrial segments. One recent approach in this area is by Macintosh [50], who compares private and public sector BPR applications. These applications cover some of the best practices we discussed in this paper (integral business process technology, task automation, contact reduction, etc.). Macintosh [50] does not question the appropriateness of the solutions in the separate industrial segments, but stresses differences with respect to behavioral issues.

On another level, Table 1 clearly shows that, except for the contributions of Buzacott [36], Seidmann and Sundararajan [25], Dewan et al. [43] and Van der Aalst [45] to some best practices, most of the best practices lack the support of an analytical or empirical study. Additional work should point out the conditions, or validity domain, under which a best practice can be expected to give the desired results in terms of cost/time reduction or quality/flexibility improvement. This leads to the following directions for future research:

• First, to further validate our BPR framework and set of best practices through an extensive survey amongst consultants and through real case studies. The survey aims at further identifying the most used best practices and the framework elements that are crucial for BPR. Case studies would give further insight into how the framework can be used during a BPR implementation.

• Second, to investigate for each best practice when, where and how to apply it, and when not to. This part is concerned with giving indications related to the size of the business process or the tasks involved. It should also study the relative impact of best practices on a business process. In this area, Foster [51] assessed the impact of BPR (essentially organizational automation impacts) in a hospital using baselining. Also, Seidmann and Sundararajan [52] studied the popular combination of the empower and triage best practices (leading to decentralization and task consolidation). They proved, using mathematical models, that this combination is sub-optimal in many cases.

• Finally, to provide users with a methodology for applying the best practices. This includes the classification of the best practices within the framework for BPR implementation as a basis (which was one of the purposes of this paper) and as a guideline to the order and conditions in which the best practices should be implemented. Of course, several authors have already investigated this area. Harrington [53] provided streamlining rules for business process redesign. Kettinger and Teng [54] provide a framework for analyzing BPR in conjunction with several strategic dimensions. However, these approaches lack a set of measures and guidelines for the application of the best practices.
We believe that, by presenting in this paper the best practices together with their qualitative assessments, we provide support to BPR practitioners dealing with the mechanics of the process. The best practices may be used as a checklist with (limited) additional guidance for the application of each of them, so that a favorable redesign of a business process becomes feasible. The checklist should be incorporated within a BPR methodology that addresses managerial aspects as well. One suggestion would be to use the five themes listed by Maull et al. [55] that lead to effective BPR implementation: integrating the business strategy, integrating performance measurement, creating business process architectures, involving human and organizational factors and identifying the role of information and technology.

References

[1] Manganelli R, Klein M. The reengineering handbook: a step-by-step guide to business transformation. New York: American Management Association; 1994.
[2] Carr D, Johansson H. Best practices in reengineering. New York: McGraw-Hill Editions; 1995.
[3] Galliers R. Against obliteration: reducing risk in business process change. In: Sauer C, Yetton P, editors. Steps to the future: fresh thinking on the management of IT-based organizational transformation. San Francisco: Jossey-Bass; 1997. p. 169-86.
[4] Grover V, Jeong SR, Kettinger WJ, Teng JTC. The implementation of business process reengineering. Journal of Management Information Systems 1995;12(1):109-44.
[5] Kettinger WJ, Teng JTC, Guha S. Business process change: a study of methodologies, techniques, and tools. MIS Quarterly 1997;21(1):55-80.
[6] Hammer M, Champy J. Reengineering the corporation: a manifesto for business revolution. New York: Harper Business Editions; 1993.
[7] Gerrits H. Business modeling based on logistics to support business process re-engineering. In: Glasson BC et al., editors. Business process re-engineering: information systems opportunities and challenges. Amsterdam: Elsevier Science; 1994. p. 279-88.
[8] Motwani J, Kumar A, Jiang J. Business process reengineering: a theoretical framework and an integrated model. International Journal of Operations & Production Management 1998;18(9/10):964-77.
[9] Valiris G, Glykas M. Critical review of existing BPR methodologies. Business Process Management Journal 1999;5(1):65-86.
[10] Sharp A, McDermott P. Workflow modeling: tools for process improvement and application development. Boston: Artech House Publishers; 2001.
[11] Brand N, van der Kolk H. Workflow analysis and design. Deventer: Kluwer Bedrijfswetenschappen; 1995 [in Dutch].
[12] Schönberger RJ. World class manufacturing: the lessons of simplicity applied. New York: Free Press; 1986.
[13] Stalk GH, Hout TM. Competing against time: how time based competition is re-shaping global markets. New York: Free Press; 1990.
[14] Womack JP, Jones DT. Lean thinking: banish waste and create wealth in your corporation. New York: Simon and Schuster; 1996.
[15] Goldratt EM. What is this thing called theory of constraints and how should it be implemented? New York: North River Press; 1990.
[16] Goldratt EM, Cox J. The goal. New York: North River Press; 1992.
[17] Goldman SL, Nagel RN, Preiss K. Agile competitors and virtual organisations. New York: Van Nostrand Reinhold; 1995.
[18] Jones C, Medien N, Merlo C, Robertson M, Shepherdson J. The lean enterprise. BT Technology Journal 1999;17(4):15-22.
[19] Alter S. Information systems: a management perspective. Amsterdam: Addison Wesley; 1999.
[20] Berio G, Vernadat F. Enterprise modeling with CIMOSA: functional and organizational aspects. Production Planning & Control 2001;12(2):128-36.
[21] Grant D. A wider view of business process reengineering. Communications of the ACM 2002;45(2):85-90.
[22] Gunasekaran A, Nath B. The role of information technology in business process reengineering. International Journal of Production Economics 1997;50(1/2):91-104.
[23] Jablonski S, Bussler C. Workflow management: modeling concepts, architecture and implementation. London: International Thomson Computer Press; 1996.
[24] Van der Aalst WMP, Berens PJS. Beyond workflow management: product-driven case handling. In: Ellis S et al., editors. International ACM SIGGROUP Conference on Supporting Group Work (GROUP 2001). New York: ACM Press; 2001. p. 42-51.
[25] Seidmann A, Sundararajan A. The effects of task and information asymmetry on business process redesign. International Journal of Production Economics 1997;50(2/3):117-28.
[26] Martin J. The best practice of business. London: John Martin Publishing; 1978.
[27] Butler P. A strategic framework for health promotion in Darebin. A report to the East Preston and Northcote community health centers by the Center for Development and Innovation in Health, Melbourne, Australia, March 1996.
[28] Golovin J. Achieving stretch goals: best practices in manufacturing for the new millennium. New York: Prentice-Hall Editions; 1997.
[29] Software Program Managers Network (SPMN). Sixteen critical software practices for performance-based management, 1999. http://www.spmn.com/critical_software_practices.html.
[30] O'Neill P, Sohal AS. Business process reengineering: a review of recent literature. Technovation 1999;19(9):571-81.
[31] Aldowaisan TA, Gaafar LK. Business process reengineering: an approach for process mapping. Omega 1999;27(5):515-24.
[32] Peppard J, Rowland P. The essence of business process reengineering. New York: Prentice-Hall Editions; 1995.
[33] Van der Aalst WMP. Workflow verification: finding control-flow errors using Petri-net-based techniques. In: Van der Aalst WMP et al., editors. Business process management: models, techniques, and empirical studies. Lecture notes in computer science, vol. 1806. Berlin: Springer; 2000. p. 161-83.
[34] Reijers HA, Limam S, Van der Aalst WMP. Product-based workflow design. Journal of Management Information Systems 2003;20(1):229-62.
[35] Klein M. 10 principles of reengineering. Executive Excellence 1995;12(2):20.
[36] Buzacott JA. Commonalities in reengineered business processes: models and issues. Management Science 1996;42(5):768-82.
[37] Oliver RK, Webber MD. Supply-chain management: logistics catches up with strategy. In: Christopher M, editor. Logistics: the strategic issues. London: Chapman & Hall; 1982. p. 63-75.
[38] Berg A, Pottjewijd P. Workflow: continuous improvement by integral process management. Schoonhoven: Academic Service; 1997 [in Dutch].
[39] Rupp RO, Russell JR. The golden rules of process redesign. Quality Progress 1994;27(12):85-92.
[40] Castano S, de Antonellis V, Melchiori M. A methodology and tool environment for process analysis and reengineering. Data & Knowledge Engineering 1999;31:253-78.
[41] Van der Aalst WMP, Van Hee KM. Workflow management: models, methods, and systems. Cambridge: MIT Press Editions; 2002.
[42] Zapf M, Heinzl A. Evaluation of generic process design patterns: an experimental study. In: Van der Aalst WMP et al., editors. Business process management. Lecture notes in computer science, vol. 1806. Berlin: Springer; 2000. p. 83-98.
[43] Dewan R, Seidmann A, Zhiping W. Workflow optimization through task redesign in business information processes. In: Proceedings of the Thirty-First Annual Hawaii International Conference on System Sciences. Washington: IEEE; 1998. p. 240-53.
[44] Reijers HA, Goverde RHJJM. Resource management: a clear-headed approach to ensure efficiency. Workflow Magazine 1998;4(6):26-8 [in Dutch].
[45] Van der Aalst WMP. Reengineering knock-out processes. Decision Support Systems 2000;30(4):451-68.
[46] Poyssick G, Hannaford S. Workflow reengineering. Mountain View: Adobe Press Editions; 1996.
[47] Van Hee KM, Reijers HA, Verbeek HMW, Zerguini L. On the optimal allocation of resources in stochastic workflow nets. In: Djemame K, Kara M, editors. Proceedings of the Seventeenth UK Performance Engineering Workshop. Leeds: Print Services University of Leeds; 2001. p. 23-34.
[48] Gunasekaran A, Marri HB, McGaughey RE, Nebhwani MD. E-commerce and its impact on operations management. International Journal of Production Economics 2002;75:185-97.
[49] Kalakota R, Whinston AB. Electronic commerce: a manager's guide. Reading: Addison-Wesley; 1997.
[50] Macintosh R. BPR: alive and well in the public sector. International Journal of Operations & Production Management 2003;23(3):327-44.
[51] Foster Jr. ST. Assessing process reengineering impacts through base lining. Benchmarking for Quality Management & Technology 1995;2(3):4-19.
[52] Seidmann A, Sundararajan A. Competing in information-intensive services: analyzing the impact of task consolidation and employee empowerment. Journal of Management Information Systems 1997;14(2):33-56.
[53] Harrington HJ. Business process improvement: the breakthrough strategy for total quality, productivity, and competitiveness. New York: McGraw-Hill; 1991.
[54] Kettinger WJ, Teng JTC. Aligning BPR to strategy: a framework for analysis. Long Range Planning 1998;31(1):93-107.
[55] Maull RS, Tranfield DR, Maull W. Factors characterising the maturity of BPR programmes. International Journal of Operations & Production Management 2003;23(6):596-624.