PA160: Net-Centric Computing II.
Distributed Systems

Luděk Matýska
Slides by: Tomáš Rebok
Faculty of Informatics, Masaryk University
Spring 2017

Lecture overview

• Distributed Systems
  • Key characteristics
  • Challenges and Issues
  • Distributed System Architectures
  • Inter-process Communication
• Middleware
  • Remote Procedure Calls (RPC)
  • Remote Method Invocation (RMI)
  • Common Object Request Broker Architecture (CORBA)
• Web Services
• Grid Services
• Issues Examples
  • Scheduling/Load-balancing in Distributed Systems
  • Fault Tolerance in Distributed Systems
• Conclusion

Distributed Systems - Definition

Distributed System by Coulouris, Dollimore, and Kindberg:
A system in which hardware and software components located at networked computers communicate and coordinate their actions only by message passing.

Distributed System by Tanenbaum and van Steen:
A collection of independent computers that appears to its users as a single coherent system.
• the independent/autonomous machines are interconnected by communication networks and equipped with software systems designed to produce an integrated and consistent computing environment

Core objective of a distributed system: resource sharing

Distributed Systems - Key characteristics

Key characteristics of distributed systems:
• Autonomicity - the system consists of several autonomous computational entities, each of which has its own local memory
• Heterogeneity - the entities may differ in many ways
  • computer HW (different data type representations), network interconnection, operating systems (different APIs), programming languages (different data structures), implementations by different developers, etc.
• Concurrency - concurrent (distributed) program execution and resource access
• No global clock - programs (distributed components) coordinate their actions by exchanging messages
  • message communication can be affected by delays, can suffer from a variety of failures, and is vulnerable to security attacks
• Independent failures - each component of the system can fail independently, leaving the others still running (and possibly not informed about the failure)
  • how to tell the difference between a network that has failed and one that has merely become unusually slow?

Distributed Systems - Challenges and Issues

What do we want from a Distributed System (DS)?
• Resource Sharing
• Openness
• Concurrency
• Scalability
• Fault Tolerance
• Security
• Transparency

Distributed Systems - Challenges and Issues: Resource Sharing and Openness

Resource Sharing
• the main motivating factor for constructing DSs
• it should be easy for users (and applications) to access remote resources, and to share them in a controlled and efficient way
• each resource must be managed by software that provides interfaces enabling the resource to be manipulated by clients
• resource = anything you can imagine (e.g., storage facilities, data, files, Web pages, etc.)

Openness
• determines whether the system can be extended and re-implemented in various ways, and whether new resource-sharing services can be added and made available for use by a variety of client programs
• the specification and documentation of key software interfaces must be published
  • using an Interface Definition Language (IDL)
• involves HW extensibility as well
  • i.e., the ability to add hardware from different vendors

Distributed Systems - Challenges and Issues: Concurrency and Scalability

Concurrency
• every resource in a DS must be designed to be safe in a concurrent environment
• applies not only to servers, but to objects in applications as well
• ensured by standard techniques, like semaphores

Scalability
• a DS is scalable if the cost of adding a user (or resource) is a constant amount in terms of the resources that must be added
  • and it is able to utilize the extra hardware/software efficiently
  • and it remains manageable

  Date          Computers      Web servers
  1979, Dec.            188              0
  1989, July        130,000              0
  1999, July     56,218,000      5,560,866
  2003, Jan.    171,638,297     35,424,956

Distributed Systems - Challenges and Issues: Fault Tolerance and Security

Fault Tolerance
• the characteristic that a distributed system appropriately handles errors that occur in the system
• failures can be detected (sometimes hard or even impossible), masked (made hidden or less severe), or tolerated
• achieved by deploying two approaches: hardware redundancy and software recovery

Security
• involves confidentiality, integrity, authentication, and availability

Distributed Systems - Challenges and Issues: Transparency I.

Transparency
• certain aspects of the DS should be made invisible to the user / application programmer
  • i.e., the system is perceived as a whole rather than as a collection of independent components
• several forms of transparency:
  • Access transparency - enables local and remote resources to be accessed using identical operations
  • Location transparency - enables resources to be accessed without knowledge of their location
  • Concurrency transparency - enables several processes to operate concurrently using shared resources without interference between them
  • Replication transparency - enables multiple instances of resources to be used to increase reliability and performance
    • without knowledge of the replicas by users / application programmers

Distributed Systems - Challenges and Issues: Transparency II.
• forms of transparency, cont'd:
  • Failure transparency - enables the concealment of faults, allowing users and application programs to complete their tasks despite the failure of HW/SW components
  • Mobility/migration transparency - allows the movement of resources and clients within a system without affecting the operation of users or programs
  • Performance transparency - allows the system to be reconfigured to improve performance as loads vary
  • Scaling transparency - allows the system and applications to expand in scale without changes to the system structure or application algorithms

Distributed Systems - Architecture Models

An architecture model:
• defines the way in which the components of systems interact with one another, and
• defines the way in which the components are mapped onto an underlying network of computers
• the overall goal is to ensure that the structure will meet present and possibly future demands
• the major concerns are to make the system reliable, manageable, adaptable, and cost-effective
• principal architecture models:
  • client-server model - the most important and most widely used
    • a service may be further provided by multiple servers
    • the servers may in turn be clients of other servers
    • proxy servers (caches) may be employed to increase availability and performance
  • peer processes - all the processes play similar roles
    • based either on structured (Chord, CAN, etc.), unstructured, or hybrid architectures

Figure: Client-Server model - a client process issues a remote invocation to a server process and receives a result.
Figure: Client-Server model - a service provided by multiple servers.
Figure: Peer processes - all processes play similar roles.

Distributed Systems - Inter-process Communication (IPC)

• the processes (components) need to communicate
• the communication may be:
  • synchronous - both send and receive are blocking operations
  • asynchronous - send is non-blocking and receive can have blocking (more common) and non-blocking variants
• the simplest forms of communication: UDP and TCP sockets

Figure: Sockets - the client binds to any local port and sends a message to the agreed port of the server; other ports remain free.

Distributed Systems - Inter-process Communication: UDP and TCP sockets

UDP/TCP sockets
• provide unreliable/reliable communication services
+ the complete control over the communication lies in the hands of applications
- too primitive to be used for developing distributed system software
  • higher-level facilities (marshalling/unmarshalling data, error detection, error recovery, etc.) must be built from scratch by developers on top of the primitive socket facilities
  • force a read/write mechanism instead of a procedure call (see the sketch below)
- another problem arises when the software needs to be used on a platform different from the one where it was developed
  • the target platform may provide a different socket implementation
⇒ these issues are eliminated by the use of a Middleware
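To illustrate how low-level this interface is, here is a minimal sketch of a UDP client in C; the address 127.0.0.1 and port 5000 are arbitrary assumptions for the example, and the application itself has to format the message bytes.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    /* create an unreliable datagram (UDP) socket */
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in server;
    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_port = htons(5000);                 /* agreed port (assumed) */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    /* the application must marshal the message itself */
    const char *msg = "hello";
    sendto(s, msg, strlen(msg), 0,
           (struct sockaddr *)&server, sizeof(server));

    close(s);
    return 0;
}

Note how nothing here resembles a procedure call: the programmer deals with byte buffers, addresses, and explicit send operations.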
Middleware

Middleware
• a software layer that provides a programming abstraction as well as masks the heterogeneity of the underlying networks, hardware, operating systems, and programming languages
• provides transparency services
• represented by processes/objects that interact with each other to implement communication and resource sharing support
• provides building blocks for the construction of SW components that can work with one another
• middleware examples:
  • Sun RPC (ONC RPC)
  • DCE RPC
  • MS COM/DCOM
  • Java RMI
  • CORBA
  • etc.

Figure: Middleware - a service layer on Machines A, B, and C, sitting between the distributed applications and the network OS services and kernel of each machine.

Middleware - Basic Services

Basic services provided:
• Directory services - services required to locate application services and resources, and to route messages
  • ⇒ service discovery
• Data encoding services - uniform data representation services for dealing with incompatibility problems on remote systems
  • e.g., Sun XDR, ISO's ASN.1, CORBA's CDR, XML, etc.
  • data marshalling/unmarshalling
• Security services - provide inter-application client-server security mechanisms
• Time services - provide a universal format for representing time on different platforms (possibly located in various time zones) in order to keep application processes synchronised
• Transaction services - provide transaction semantics to support commit, rollback, and recovery mechanisms

Middleware - Basic Services: A Need for Data Encoding Services

Data encoding services are required because remote machines may have:
• different byte ordering
• different sizes of integers and other types
• different floating point representations
• different character sets
• alignment requirements

#include <stdio.h>

int main(void) {
    unsigned int n = 0x11223344;
    char *a = (char *)&n;
    printf("%02x, %02x, %02x, %02x\n", a[0], a[1], a[2], a[3]);
    return 0;
}

Output on a Pentium (little endian): 44, 33, 22, 11
Output on a G4 (big endian): 11, 22, 33, 44
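A common low-level remedy, shown here as a minimal sketch (not part of any particular middleware stack), is to convert values to a single network byte order with the standard htonl()/ntohl() functions before transmission; encoding services such as XDR generalize this idea to complete data structures.

#include <stdio.h>
#include <arpa/inet.h>

int main(void) {
    unsigned int host_value = 0x11223344;
    /* htonl(): host byte order -> network byte order (big endian) */
    unsigned int wire_value = htonl(host_value);
    /* ntohl(): network byte order -> host byte order */
    unsigned int back = ntohl(wire_value);
    printf("%08x -> %08x -> %08x\n", host_value, wire_value, back);
    return 0;
}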
Middleware - Remote Procedure Calls (RPC)

Remote Procedure Calls (RPC)
• a very simple idea, similar to the well-known procedure call mechanism
  • a client sends a request and blocks until a remote server sends a response
• the goal is to allow distributed programs to be written in the same style as conventional programs for centralised computer systems
  • while being transparent - the programmer need not be aware whether the called procedure is executing on a local or a remote computer
• the idea:
  • the remote procedure is represented as a stub on the client side
    • it behaves like a local procedure, but rather than placing the parameters into registers, it packs them into a message, issues a send primitive, and blocks itself waiting for a reply (see the sketch below)
  • the server passes the arrived message to a server stub (also known as a skeleton)
  • the skeleton unpacks the parameters and calls the procedure in a conventional manner
  • the results are returned to the skeleton, which packs them into a message directed to the client stub

Figure: Local Procedure Call vs. Remote Procedure Call - in the remote case, parameters and results pass through the client stub and the server skeleton as request/reply messages over the network.

Figure: The remote procedure call in detail - the client program calls the stub, which packs the parameters and sends the request; the server stub receives the request, unpacks the parameters, calls the server procedure, and sends the reply back over the communication network.

Remote Procedure Calls (RPC) - Components

• client program, client stub, communication modules
• server stub, service procedure
• dispatcher - selects one of the server stub procedures according to the procedure identifier in the request message

Sun RPC: the procedures are identified by:
• program number - can be obtained from a central authority to allow every program to have its own unique number
• procedure number - the identifier of the particular procedure within the program
• version number - changes when a procedure signature changes

Remote Procedure Calls (RPC) - Location Services: Portmapper

Clients need to know the port number of a service provided by the server ⇒ Portmapper
• a server registers its program#, version#, and port# with the local portmapper
• a client finds out the port# by sending a request
• the portmapper listens on a well-known port (111)
• the particular procedure required is identified in the subsequent procedure call

Figure: Portmapper - the server registers [prog#, v#, port#] with the portmapper on its well-known port; the client queries the portmapper and then calls the server directly.
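Returning to the stub idea above, the following is a minimal sketch (not any particular RPC library) of what a generated client stub for a hypothetical remote procedure int add(int a, int b) might do; the transport helpers send_request() and wait_for_reply() are assumed to be provided elsewhere.

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* hypothetical transport helpers, provided by the communication module */
void send_request(const void *buf, size_t len);
void wait_for_reply(void *buf, size_t len);

/* client stub for the hypothetical remote procedure: int add(int a, int b) */
int32_t add_stub(int32_t a, int32_t b) {
    unsigned char req[8], rep[4];
    uint32_t net_a = htonl((uint32_t)a), net_b = htonl((uint32_t)b);

    /* marshal the parameters into the request message */
    memcpy(req, &net_a, 4);
    memcpy(req + 4, &net_b, 4);

    send_request(req, sizeof(req));    /* issue the send primitive */
    wait_for_reply(rep, sizeof(rep));  /* block until the reply arrives */

    /* unmarshal the result */
    uint32_t net_result;
    memcpy(&net_result, rep, 4);
    return (int32_t)ntohl(net_result);
}

From the caller's point of view, add_stub(2, 3) looks exactly like a local call, which is the transparency the slide describes.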
Remote Procedure Calls (RPC) - Parameters passing

How to pass parameters to remote procedures?
• pass by value - easy: just copy the data into the network message
• pass by reference - makes no sense without shared memory

Pass by reference: the steps
1. copy the referenced items (marshalled) into a message buffer
2. ship them over, unmarshal the data at the server
3. pass a local pointer to the server stub function
4. send the new values back
• to support complex structures:
  • copy the structure into a pointerless representation
  • transmit it
  • reconstruct the structure with local pointers on the server

Remote Procedure Calls (RPC) - Parameters passing: eXternal Data Representation (XDR)

Sun RPC: to avoid compatibility problems, the eXternal Data Representation (XDR) is used
• XDR primitive function examples:
  • xdr_int(), xdr_char(), xdr_u_short(), xdr_bool(), xdr_long(), xdr_u_int(), xdr_wrapstring(), xdr_short(), xdr_enum(), xdr_void()
• XDR aggregation functions:
  • xdr_array(), xdr_string(), xdr_union(), xdr_vector(), xdr_opaque()
• only a single input parameter is allowed in a procedure call
  • procedures requiring multiple parameters must include them as components of a single structure

Remote Procedure Calls (RPC) - When Things Go Wrong I.

• local procedure calls do not fail
  • if they core dump, the entire process dies
• there are more opportunities for errors with RPC:
  • the server could generate an error
  • problems in the network (lost/delayed requests/replies)
  • server crash
  • the client might crash while the server is still executing code for it
• transparency breaks here
  • applications should be prepared to deal with RPC failures

Remote Procedure Calls (RPC) - When Things Go Wrong II.

Semantics of local procedure calls: exactly once
• difficult to achieve with RPC

Four remote call semantics are available in RPC:
• at-least-once semantics
  • the client keeps re-sending the message until a reply has been received (see the retry sketch below)
  • failure is assumed after n re-sends
  • guarantees that the call has been made "at least once", but possibly multiple times
  • ideal for idempotent operations
• at-most-once semantics
  • the client gives up immediately and reports back a failure
  • guarantees that the call has been made "at most once", but possibly not at all
• exactly-once semantics
  • the most desirable, but the most difficult to implement
• maybe semantics
  • no message delivery guarantees are provided at all
  • (easy to implement)

Figure: Message-passing semantics - (a) at-least-once: the client re-sends the request after each timeout, so the results of the repeated executions could differ; (b) exactly-once: requests are acknowledged and the server keeps a table of current requests.
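A minimal sketch of the at-least-once client behaviour described above, assuming hypothetical send_request()/recv_reply_timeout() transport helpers: the request is retransmitted on every timeout, and failure is reported after n attempts.

#include <stddef.h>

/* hypothetical transport helpers: recv_reply_timeout() returns the number
 * of bytes received, or -1 if the timeout expired */
void send_request(const void *req, size_t len);
int  recv_reply_timeout(void *rep, size_t len, int timeout_ms);

/* returns 0 on success, -1 when failure is assumed after n re-sends */
int call_at_least_once(const void *req, size_t req_len,
                       void *rep, size_t rep_len, int n) {
    for (int attempt = 0; attempt < n; attempt++) {
        send_request(req, req_len);                /* (re)transmit */
        if (recv_reply_timeout(rep, rep_len, 1000) >= 0)
            return 0;                              /* reply received */
        /* timeout: the request may have been executed anyway - this is
         * why at-least-once is only safe for idempotent operations */
    }
    return -1;
}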
Remote Procedure Calls (RPC) - When Things Go Wrong IV. - Complications

1. it is necessary to understand the application:
  • idempotent functions - in the case of a failure, the message may be retransmitted and re-run without harm
  • non-idempotent functions - have side-effects ⇒ the retransmission has to be controlled by the server
    • a duplicate request (retransmission) has to be detected
    • once detected, the server procedure is NOT re-run; just the results are resent (if available in a server cache)
2. in the case of a server crash, the order of execution vs. crash matters (the server may crash after executing the request but before replying, or before executing it at all - the client sees no reply in both cases)
3. in the case of a client crash, the procedure keeps running on the server
  • it consumes resources (e.g., CPU time) and possesses resources (e.g., locked files), etc.
  • may be overcome by employing soft-state principles
    • keep-alive messages

Remote Procedure Calls (RPC) - Code Generation I.

• RPC drawbacks:
  • complex API, not easy to debug
  • the use of XDR is difficult
• but RPC is often used in a similar way
  • ⇒ the server/client code can be generated automatically
  • assumes well-defined interfaces (IDL)
• the application programmer has to supply the following:
  • interface definition file - defines the interfaces (data structures, procedure names, and parameters) of the remote procedures offered by the server (see the example below)
  • client program - defines the user interfaces, the calls to the remote procedures of the server, and the client-side processing functions
  • server program - implements the calls offered by the server
• compilers:
  • rpcgen for C/C++, jrpcgen for Java

Remote Procedure Calls (RPC) - Code Generation II.

Figure: Code generation - the IDL compiler takes the Interface Definition and produces the Client Stub, the Interface Header, and the Server Stub, which are compiled together with the Client Program and the Server Program.
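As an illustration of such an interface definition file, here is a minimal, hypothetical rpcgen input (conventionally a .x file); the names and the program/version/procedure numbers are arbitrary example values.

/* add.x - a hypothetical interface definition for rpcgen */
struct intpair {
    int a;
    int b;
};

program ADD_PROG {
    version ADD_VERS {
        int ADD(intpair) = 1;   /* procedure number */
    } = 1;                      /* version number */
} = 0x20000099;                 /* program number */

rpcgen then generates the client and server stubs; note how, in line with the single-input-parameter rule mentioned earlier, the two operands are wrapped in one structure.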
Middleware - Remote Method Invocation (RMI)

• as the popularity of object technology increased, techniques were developed to allow calls to remote objects instead of remote procedures only

Remote Method Invocation (RMI)
• essentially the same as RPC, except that it operates on objects instead of applications/procedures
• the RMI model represents a distributed object application
• it allows an object inside a JVM (a client) to invoke a method on an object running on a remote JVM (a server) and have the results returned to the client
  • the server application creates an object and makes it accessible remotely (i.e., registers it)
  • the client application receives a reference to the object on the server and invokes methods on it
    • the reference is obtained by looking it up in the registry
• important: a method invocation on a remote object has the same syntax as a method invocation on a local object

Remote Method Invocation (RMI) - Architecture I.

The interface through which the client and server interact is (similarly to RPC) provided by stubs and skeletons:

Figure: RMI architecture - the calling object in the client JVM passes parameters to the stub; the request message crosses the network to the skeleton in the server JVM, which invokes the called object and returns the result in a reply message.

Remote Method Invocation (RMI) - Architecture II.

Two fundamental concepts lie at the heart of the distributed object model:
• remote object reference - an identifier that can be used throughout the distributed system to refer to a particular unique remote object
  • its construction must ensure its uniqueness (see the sketch below)

  32 bits            32 bits       32 bits   32 bits
  Internet address | port number | time    | object number | interface of remote object

• remote interface - specifies which methods of the particular object can be invoked remotely
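As a rough C sketch of the field layout pictured above (purely illustrative - RMI does not expose such a type, and the field names are assumptions):

#include <stdint.h>

/* a sketch of the remote object reference layout from the diagram above;
 * the widths (4 x 32 bits) follow the slide, the names are illustrative */
struct remote_object_ref {
    uint32_t internet_address;  /* where the object's server lives */
    uint32_t port_number;       /* the server's port */
    uint32_t time;              /* creation time - helps keep the
                                   reference unique across restarts */
    uint32_t object_number;     /* which object within the server */
    /* ... plus the interface of the remote object */
};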
Remote Method Invocation (RMI) - Architecture III.

• the remote objects can be accessed concurrently
  • encapsulation allows objects to provide methods for protecting themselves against incorrect accesses
  • e.g., synchronization primitives (condition variables, semaphores, etc.)
• RMI transaction semantics are similar to the RPC ones
  • at-least-once, at-most-once, exactly-once, and maybe semantics
• data encoding services:
  • stubs use Object Serialization to marshal the arguments
  • the values of object arguments are rendered into a stream of bytes that can be transmitted over a network
  • the arguments must be primitive types or objects that implement the Serializable interface
• parameter passing:
  • local objects are passed by value
  • remote objects are passed by reference

Middleware - Common Object Request Broker Architecture (CORBA)

Common Object Request Broker Architecture (CORBA)
• an industry standard developed by the OMG (Object Management Group - a consortium of more than 700 companies) to aid distributed objects programming
  • OMG was established in 1988
  • the initial CORBA specification came out in 1992
    • but significant revisions have taken place since that time
• provides a platform-independent and language-independent architecture (framework) for writing distributed, object-oriented applications
  • i.e., application programs can communicate without restrictions as to:
    • programming languages, hardware platforms, software platforms, networks they communicate over
• but CORBA is just a specification for creating and using distributed objects; it is not a piece of software or a programming language
  • several implementations of the CORBA standard exist (e.g., IBM's SOM and DSOM architectures)

CORBA Components

CORBA is composed of five major components:
• Object Request Broker (ORB)
• Interface Definition Language (IDL)
• Dynamic Invocation Interface (DII)
• Interface Repositories (IR)
• Object Adapters (OA)

CORBA Components - Object Request Broker (ORB) I.

Object Request Broker (ORB)
• the heart of CORBA
• introduced as a part of OMG's Object Management Architecture (OMA), on which CORBA is based
• a distributed service that implements all the requests to the remote object(s)
  • it locates the remote object on the network, communicates the request to the object, waits for the results and (when available) communicates those results back to the client
• implements location transparency
  • exactly the same request mechanism is used regardless of where the object is located
  • it might be in the same process as the client or across the planet
• implements programming language independence
  • the client issuing a request can be written in a different programming language from the implementation of the CORBA object
  • both the client and the object implementation are isolated from the ORB by an IDL interface
• Internet Inter-ORB Protocol (IIOP) - the standard communication protocol between ORBs

CORBA Components - Object Request Broker (ORB) II.

Figure: ORB architecture - the client and the object implementation each talk to their ORB through IDL interfaces; the ORBs communicate with each other over the network.

CORBA Components - Interface Definition Language (IDL) I.

Interface Definition Language (IDL)
• as with RMI, CORBA objects have to be specified with interfaces
  • interface ~ a contract between the client (code using an object) and the server (code implementing the object)
  • indicates the set of operations the object supports and how they should be invoked (but NOT how they are implemented)
  • defines modules, interfaces, types, attributes, exceptions, and method signatures
• uses the same lexical rules as C++
  • with additional keywords to support distribution (e.g., interface, any, attribute, in, out, inout, readonly, raises)
• language bindings are defined for many different programming languages (e.g., C/C++, Java, etc.)
  • via language mappings, the IDL translates to different constructs in the different implementation languages
  • this allows an object implementor to choose the appropriate programming language for the object, and
  • it allows the developer of the client to choose the appropriate (and possibly different) programming language for the client

CORBA Components - Interface Definition Language (IDL) II.

Interface Definition Language (IDL) example:

module StockObjects {
    struct Quote {
        string symbol;
        long   at_time;
        double price;
        long   volume;
    };

    exception Unknown {};

    interface Stock {
        // Returns the current stock quote.
        Quote get_quote() raises(Unknown);

        // Sets the current stock quote.
        void set_quote(in Quote stock_quote);

        // Provides the stock description, e.g. company name.
        readonly attribute string description;
    };

    interface StockFactory {
        Stock create_stock(in string symbol, in string description);
    };
};
CORBA Components - Interface Definition Language (IDL) III. - Stubs and Skeletons

The IDL compiler automatically compiles the IDL into client stubs and object skeletons:

Figure: Stubs and skeletons are automatically generated from IDL interfaces - the client program calls the language-mapped operation signatures (stub); the object implementation plugs its methods into the language-mapped entry points (skeleton); a location service and the transport layer sit underneath.

CORBA Components - Interface Definition Language (IDL) IV. - Development Process Using IDL

Figure: Development process using IDL - from the IDL definition, the compiler generates the stub source (compiled with the client implementation into the client program) and the skeleton source (compiled with the object implementation source into the object implementation).

CORBA Components - Dynamic Invocation Interface (DII) & Dynamic Skeleton Interface (DSI)

Dynamic Invocation Interface (DII)
• CORBA supports both dynamic and static invocation interfaces
  • static invocation interfaces are determined at compile time
  • dynamic interfaces allow client applications to use server objects without knowing the type of those objects at compile time
• DII - an API which allows the dynamic construction of CORBA object invocations

Dynamic Skeleton Interface (DSI)
• DSI is the server side's analogue to the client side's DII
• allows an ORB to deliver requests to an object implementation that does not have compile-time knowledge of the type of the object it is implementing

CORBA Components - Interface Repository (IR)

Interface Repository (IR)
• a runtime component used to dynamically obtain information on IDL types (e.g., object interfaces)
• using the IR, a client should be able to locate an object that is unknown at compile time, find information about its interface, and build a request to be forwarded through the ORB
• this kind of information is necessary when a client wants to use the DII to construct requests dynamically

CORBA Components - Object Adapters (OAs)

Object Adapters (OAs)
• the interface between the ORB and the server process
• OAs listen for client connections/requests and map the inbound requests to the desired target object instance
• provide an API that object implementations use for:
  • generation and interpretation of object references
  • method invocation
  • security of interactions
  • object and implementation activation and deactivation
  • mapping object references to the corresponding object implementations
  • registration of implementations
• two basic kinds of OAs:
  • basic object adapter (BOA) - leaves many features unsupported, requiring proprietary extensions
  • portable object adapter (POA) - intended to support multiple ORB implementations (of different vendors), allow persistent objects, etc.
CORBA Object & Object Reference

CORBA Objects are fully encapsulated:
• accessed through well-defined interfaces only
• interfaces & implementations are totally separate
  • for one interface, multiple implementations are possible
  • one implementation may support multiple interfaces

CORBA Object Reference is the distributed computing equivalent of a pointer:
• CORBA defines the Interoperable Object Reference (IOR)
• an IOR contains a fixed object key, containing:
  • the object's fully qualified interface name (repository ID)
  • user-defined data for the instance identifier
• it can also contain transient information:
  • the host and port of its server, metadata about the server's ORB (for potential optimizations), etc.
• ⇒ the IOR uniquely identifies one object instance

CORBA Services (COS)

• the OMG has defined a set of Common Object Services to support the integration and interoperation of distributed objects
• = frequently used components needed for building robust applications
• typically supplied by vendors
• OMG defines interfaces to the services to ensure interoperability

CORBA Services - Popular Services Example

  Service              Description
  Object life cycle    Defines how CORBA objects are created, removed, moved, and copied
  Naming               Defines how CORBA objects can have friendly symbolic names
  Events               Decouples the communication between distributed objects
  Relationships        Provides arbitrary typed n-ary relationships between CORBA objects
  Externalization      Coordinates the transformation of CORBA objects to and from external media
  Transactions         Coordinates atomic access to CORBA objects
  Concurrency Control  Provides a locking service for CORBA objects in order to ensure serializable access
  Property             Supports the association of name-value pairs with CORBA objects
  Trader               Supports finding CORBA objects based on properties describing the service offered by the object
  Query                Supports queries on objects

CORBA - Architecture Summary

Figure: CORBA architecture - the client invokes an operation (in args, out args + return value) on an object (servant) through IDL stubs or the DII; the request passes through the ORB core (GIOP/IIOP) and is delivered via the object adapter through the IDL skeleton or the DSI; the IDL compiler, the interface repository, and the implementation repository support this process.
Issues Examples - Scheduling/Load-balancing in Distributed Systems I.

For the concurrent execution of interacting processes:
• communication and synchronization between processes are the two essential system components

Before the processes can execute, they need to be:
• scheduled, and
• allocated with resources

Why is scheduling in distributed systems of special interest?
• because of issues that are different from those in traditional multiprocessor systems:
  • the communication overhead is significant
  • the effect of the underlying architecture cannot be ignored
  • the dynamic behaviour of the system must be addressed
• local scheduling (on each node) + global scheduling

Issues Examples - Scheduling/Load-balancing in Distributed Systems II.

Scheduling/Load-balancing in Distributed Systems
• let's have a pool of jobs
  • with some inter-dependencies among them
• let's have a set of nodes (processors), which are able to communicate with each other

Load-balancing
The term load-balancing means assigning the jobs to the processors in a way which minimizes the time/communication overhead necessary to compute them.

• load-balancing - divides the jobs among the processors
• scheduling - defines in which order the jobs have to be executed (on each processor)
• load-balancing and scheduling are tightly coupled (usually considered synonyms in DSs)
• objectives:
  • enhance the overall system performance metric
    • process completion time and processor utilization
  • location and performance transparency
Issues Examples - Scheduling/Load-balancing in Distributed Systems III.

The scheduling/load-balancing task can be represented using graph theory:
• the pool of N jobs with dependencies can be described as a graph G(V, U), where:
  • the nodes represent the jobs (processes)
  • the edges represent the dependencies among the jobs/processes (e.g., an edge from i to j requires that the process i has to complete before j can start executing)
• the graph G has to be split into p parts, so that:
  • N = N_1 ∪ N_2 ∪ ... ∪ N_p
  • the parts satisfy the condition |N_i| ≤ ⌈N/p⌉, where:
    • |N_i| is the number of jobs assigned to the processor i, and
    • p is the number of processors, and
  • the number/cost of the edges connecting the parts is minimal
• the objectives:
  • uniform load-balancing of the jobs
  • minimizing the communication (the minimal number of edges among the parts)
• the splitting problem is NP-complete

Figure: An illustration of splitting 4 jobs onto 2 processors - (a) precedence process model, (b) communication process model, (c) disjoint process model.

Scheduling/Load-balancing in Distributed Systems IV.

• the "proper" approach to the scheduling/load-balancing problem depends on the following criteria:
  • jobs' cost
  • dependencies among the jobs
  • jobs' locality

Scheduling/Load-balancing in Distributed Systems - Jobs' Cost

• the job's cost may be known:
  • before the whole problem set's execution
  • during the problem's execution, but before the particular job's execution
  • just after the particular job finishes
• cost variability - all the jobs may have (more or less) the same cost, or the costs may differ
• the problem classes based on jobs' cost:
  • all the jobs have the same cost: easy
  • the costs are variable, but known: more complex (a simple greedy heuristic for this case is sketched below)
  • the costs are unknown in advance: the most complex
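The optimal partitioning is NP-complete, but a simple greedy heuristic conveys the idea for the known-costs case: assign each job to the currently least-loaded processor. A minimal sketch under those assumptions (the job costs are example values; dependencies and communication are ignored):

#include <stdio.h>

#define P 2   /* number of processors */

int main(void) {
    double cost[] = { 4.0, 2.0, 3.0, 1.0 };   /* example job costs */
    int n = sizeof(cost) / sizeof(cost[0]);
    double load[P] = { 0 };

    for (int j = 0; j < n; j++) {
        int target = 0;
        for (int p = 1; p < P; p++)            /* find the least-loaded */
            if (load[p] < load[target]) target = p;
        load[target] += cost[j];
        printf("job %d -> processor %d\n", j, target);
    }
    for (int p = 0; p < P; p++)
        printf("processor %d load: %.1f\n", p, load[p]);
    return 0;
}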
Scheduling/Load-balancing in Distributed Systems - Dependencies Among the Jobs

• is the order of the jobs' execution important?
• the dependencies among the jobs may be known:
  • before the whole problem set's execution
  • during the problem's execution, but before the particular job's execution
  • or they may be fully dynamic
• the problem classes based on jobs' dependencies:
  • the jobs are fully independent of each other: easy
  • the dependencies are known or predictable: more complex
    • flooding
    • in-trees, out-trees (balanced or unbalanced)
    • generic oriented graphs (DAGs)
  • the dependencies change dynamically: the most complex
    • e.g., searching/lookup problems

Scheduling/Load-balancing in Distributed Systems - Locality

• do all the jobs communicate in the same/similar way?
• is it suitable/necessary to execute some jobs "close" to each other?
• are the jobs' communication dependencies known in advance?
• the problem classes based on jobs' locality:
  • the jobs do not communicate (except perhaps during initialization): easy
  • the communications are known/predictable: more complex
    • regular (e.g., a grid) or irregular
  • the communications are unknown in advance: the most complex
    • e.g., a discrete event simulation

Scheduling/Load-balancing in DSs - Solving Methods

• in general, the "proper" solving method depends on the time when the particular information becomes known
• basic classes of solving algorithms:
  • static - offline algorithms
  • semi-static - hybrid approaches
  • dynamic - online algorithms
• some (but not all) variants:
  • static load-balancing
  • semi-static load-balancing
  • self-scheduling
  • distributed queues
  • DAG planning

Scheduling/Load-balancing in DSs - Solving Methods: Semi-static load-balancing

Semi-static load-balancing
• suitable for problem sets with slowly changing parameters and with locality importance
• iterative approach
• uses a static algorithm
  • the result (from the static algorithm) is used for several steps (a slight imbalance is accepted)
  • after these steps, the problem set is recalculated with the static algorithm again
• often used for:
  • particle simulation
  • calculations on slowly-changing grids (but in a different sense than in the previous lectures)

Scheduling/Load-balancing in DSs - Solving Methods: Self-scheduling I.

Self-scheduling
• a centralized pool of jobs
  • idle processors pick jobs from the pool
  • new (sub)jobs are added to the pool
+ ease of implementation
• suitable for:
  • a set of independent jobs
  • jobs with unknown costs
  • jobs where locality does not matter
- unsuitable for too small jobs - due to the communication overhead
  • ⇒ coupling jobs into bulks:
    • fixed size
    • controlled coupling
    • tapering
    • weighted distribution

Scheduling/Load-balancing in DSs - Solving Methods: Self-scheduling II. - Fixed size & Controlled coupling

Fixed size
• a typical offline algorithm
• requires much information (the number and cost of each job, ...)
• it is possible to find the optimal solution
• theoretically important, but not suitable for practical solutions

Controlled coupling
• uses bigger bulks at the beginning of the execution, smaller bulks at the end of the execution
  • lower overhead at the beginning, finer coupling at the end
• the bulk's size is computed as K_i = R_i / p, where:
  • R_i is the number of remaining jobs
  • p is the number of processors
• (see the sketch below)
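A minimal sketch of the controlled-coupling rule K_i = R_i / p: each time an idle processor asks for work, it receives 1/p of the jobs that currently remain, so the bulks shrink as the computation proceeds (the counts are example values).

#include <stdio.h>

int main(void) {
    int remaining = 100;   /* jobs left in the central pool */
    int p = 4;             /* number of processors */

    /* each request hands out K = R/p (at least 1) of the remaining jobs */
    while (remaining > 0) {
        int bulk = remaining / p;
        if (bulk < 1) bulk = 1;
        printf("handing out a bulk of %d (remaining before: %d)\n",
               bulk, remaining);
        remaining -= bulk;
    }
    return 0;
}

Running this prints bulks of 25, 18, 14, ... - big at the start, fine-grained at the end, exactly the behaviour the slide describes.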
Scheduling/Load-balancing in DSs - Solving Methods: Self-scheduling II. - Tapering & Weighted distribution

Tapering
• analogous to Controlled coupling, but the bulk size is further a function of the jobs' variance
  • uses historical information
  • low variance ⇒ bigger bulks
  • high variance ⇒ smaller bulks

Weighted distribution
• considers the nodes' computational power
• suitable for heterogeneous systems
• uses historical information as well

Scheduling/Load-balancing in DSs - Solving Methods: Distributed Queues

Distributed Queues
• ≈ self-scheduling for distributed memory
• instead of a centralized pool, a queue on each node is used (per-processor queues)
• suitable for:
  • distributed systems where locality does not matter
  • both static and dynamic dependencies
  • unknown costs
• an example: the diffusion approach
  • in every step, the cost of the jobs remaining on each processor is computed
  • the processors exchange this information and perform the balancing
  • locality must not be important

Figure: Centralised Pool (left) vs. Distributed Queues (right).

Scheduling/Load-balancing in DSs - Solving Methods: DAG Planning

DAG Planning
• another graph model:
  • the nodes represent the jobs (possibly weighted)
  • the edges represent the dependencies and/or the communication (may also be weighted)
• e.g., suitable for digital signal processing
• basic strategy - divide the DAG so that the communication and the processors' occupation (time) are minimized
  • an NP-complete problem
• takes the dependencies among the jobs into account

Scheduling/Load-balancing in DSs - Design Issues I.

When is scheduling/load-balancing necessary?
• for middle-loaded systems
  • lowly-loaded systems - jobs rarely wait (there's always an idle processor)
  • highly-loaded systems - little benefit (the load-balancing cannot help)

What is the performance metric?
• mean response time

What is the measure of load?
• must be easy to measure
• must reflect performance improvement
• examples: queue lengths at CPU, CPU utilization

Scheduling/Load-balancing in DSs - Design Issues I. - Components

Types of policies:
• static (decisions hardwired into the system), dynamic (uses load information), adaptive (the policy varies according to load)

Policies:
• Transfer policy: when to transfer a process?
  • threshold-based policies are common and easy (see the sketch below)
• Selection policy: which process to transfer?
  • prefer new processes
  • the transfer cost should be small compared to the execution cost
  • ⇒ select processes with long execution times
• Location policy: where to transfer the process?
  • polling, random, nearest neighbor, etc.
• Information policy: when and from where?
  • demand-driven (only a sender/receiver may ask), time-driven (periodic), state-change-driven (send an update if the load changes)
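A minimal sketch of a threshold-based transfer policy combined with sequential threshold probing as the location policy; the probe_load() helper and all the constants are hypothetical.

#define THRESHOLD 3   /* queue length above which we try to transfer */
#define NODES     5
#define MAX_PROBE 3   /* probe at most this many nodes */

/* hypothetical helper: returns the current queue length of a node */
int probe_load(int node);

/* sender-initiated decision: if the local queue exceeds the threshold,
 * probe nodes sequentially and transfer to the first one below the
 * threshold; otherwise keep the job locally */
int choose_target(int local_queue_len, int self) {
    if (local_queue_len <= THRESHOLD)
        return self;                       /* no transfer needed */
    for (int probed = 0, node = 0; probed < MAX_PROBE && node < NODES; node++) {
        if (node == self) continue;
        probed++;
        if (probe_load(node) < THRESHOLD)
            return node;                   /* transfer to this node */
    }
    return self;                           /* none found: keep the job */
}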
Scheduling/Load-balancing in DSs - Design Issues II. - Sender-initiated Policy

Sender-initiated Policy
• Transfer policy: when a new process arrives and the local queue length n exceeds the threshold T, try to transfer it; otherwise enqueue it at the local CPU
• Selection policy: the newly arrived process
• Location policy: three variations:
  • Random - may generate lots of transfers
    • ⇒ necessary to limit the maximum number of transfers
  • Threshold - probe n nodes sequentially
    • transfer to the first node below the threshold; if there is none, keep the job
  • Shortest - poll Np nodes in parallel
    • choose the least-loaded node below T
    • if there is none, keep the job

Scheduling/Load-balancing in DSs - Design Issues II. - Receiver-initiated Policy

Receiver-initiated Policy
• Transfer policy: if a departing process causes the load to drop below the threshold T, find a process from elsewhere
• Selection policy: a newly arrived or partially executed process
• Location policy:
  • Threshold - probe up to Np other nodes sequentially
    • transfer from the first one above the threshold; if there is none, do nothing
  • Shortest - poll n nodes in parallel
    • choose the node with the heaviest load above T

Scheduling/Load-balancing in DSs - Design Issues II. - Symmetric Policy

Symmetric Policy
• combines the previous two policies without change
• nodes act as both senders and receivers
• uses the average load as the threshold
  • above the average load: sender-initiated; below it: receiver-initiated

Scheduling/Load-balancing in DSs - Case study: V-System (Stanford)

V-System (Stanford)
• state-change-driven information policy
  • a significant change in CPU/memory utilization is broadcast to all nodes
• the M least-loaded nodes are receivers, the others are senders
• sender-initiated, with a new-job selection policy
• Location policy:
  • probe a random receiver
  • if it is still a receiver (below the threshold), transfer the job
  • otherwise try another one

Scheduling/Load-balancing in DSs - Case study: Sprite (Berkeley) I.

Sprite (Berkeley)
• centralized information policy: a coordinator keeps the info
  • state-change-driven information policy
• Receiver: a workstation with no keyboard/mouse activity for a defined time period (30 seconds) and below the limit (active processes < number of processors)
• Selection policy: done manually by the user ⇒ the workstation becomes a sender
• Location policy: the sender queries the coordinator
  • the workstation with the foreign process becomes a sender if the user becomes active again

Scheduling/Load-balancing in DSs - Case study: Sprite (Berkeley) II.

Sprite (Berkeley), cont'd:
• Sprite process migration:
  • facilitated by the Sprite file system
  • state transfer:
    • swap everything out
    • send the page tables and file descriptors to the receiver
    • create/establish the process on the receiver and load the necessary pages
    • pass the control
  • the only problem: communication dependencies
    • solution: redirect the communication from the workstation to the receiver
Scheduling/Load-balancing in DSs - Code and Process Migration

Code and Process Migration
• key reasons: performance and flexibility
• flexibility:
  • dynamic configuration of a distributed system
  • clients don't need preinstalled software (download on demand)
• process migration (strong mobility)
  • process = code + data + stack
  • examples: Condor, DQS
• code migration (weak mobility)
  • the transferred program always starts from its initial state
• migration in heterogeneous systems:
  • only weak mobility is supported in common systems (recompiled code, no run-time information)
  • virtual machines may be used: interpreters (scripts) or intermediate code (Java)

Issues Examples - Fault Tolerance in Distributed Systems I.

• single machine systems:
  • failures are all or nothing
  • OS crash, disk failures, etc.
• distributed systems: multiple independent nodes
  • partial failures are also possible (some nodes fail)
  • the probability of failure grows with the number of independent components (nodes) in the system
• fault tolerance: the system should provide its services despite faults:
  • transient faults
  • intermittent faults
  • permanent faults

Fault Tolerance in Distributed Systems - Failure Types

  Type of failure             Description
  Crash failure               A server halts, but is working correctly until it halts
  Omission failure            A server fails to respond to incoming requests
    Receive omission          A server fails to receive incoming messages
    Send omission             A server fails to send messages
  Timing failure              A server's response lies outside the specified time interval
  Response failure            The server's response is incorrect
    Value failure             The value of the response is wrong
    State transition failure  The server deviates from the correct flow of control
  Arbitrary failure           A server may produce arbitrary responses at arbitrary times

Fault Tolerance in Distributed Systems II.

• handling faulty processes: through redundancy
  • organize several processes into a group
  • all processes perform the same computation
  • all messages are sent to all the members of the particular group
  • a majority needs to agree on the results of a computation (see the voting sketch below)
  • ideally, multiple independent implementations of the application are desirable (to prevent identical bugs)
• use process groups to organize such processes

Figure: Flat Groups vs. Hierarchical Groups - in a flat group all workers are equal; in a hierarchical group a coordinator distributes the work among the workers.

Fault Tolerance in DSs - Agreement in Faulty Systems

• how should processes agree on the results of a computation?
• k-fault tolerant: the system can survive k faults and yet function
• assume processes fail silently
  • ⇒ we need (k + 1) redundancy to tolerate k faults
• Byzantine failures: processes run even if sick
  • they produce erroneous, random, or malicious replies
  • Byzantine failures are the most difficult to deal with
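A minimal sketch of redundancy-based masking: under the silent-failure assumption any reply received is correct, so k + 1 replicas suffice; against Byzantine-style wrong replies, a majority vote over 2k + 1 replies masks up to k faulty ones. The replies array below is illustrative only.

#include <stdio.h>

/* majority vote over the replies collected from a process group;
 * with 2k+1 replies this masks up to k arbitrary (wrong) replies */
int majority(const int *replies, int n) {
    for (int i = 0; i < n; i++) {
        int votes = 0;
        for (int j = 0; j < n; j++)
            if (replies[j] == replies[i]) votes++;
        if (votes > n / 2) return replies[i];
    }
    return -1;   /* no majority - agreement failed */
}

int main(void) {
    int replies[] = { 42, 42, 7 };   /* one faulty reply out of three */
    printf("agreed result: %d\n", majority(replies, 3));
    return 0;
}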
Fault Tolerance in DSs - Agreement in Faulty Systems: Byzantine Faults

Byzantine Generals Problem:
• four generals lead their divisions of an army
• the divisions camp on the mountains on the four sides of an enemy-occupied valley
• the divisions can only communicate via messengers
• the messengers are totally reliable, but may need an arbitrary amount of time to cross the valley
  • they may even be captured and never arrive
• if the action taken by each division is not consistent with that of the others, the army will be defeated
• we need a scheme for the generals to agree on a common plan of action (attack or retreat)
  • even if some of the generals are traitors who will do anything to prevent the loyal generals from reaching the agreement
• the problem is nontrivial even if the messengers are totally reliable
  • with unreliable messengers, the problem is very complex
• Fischer, Lynch, Paterson: in asynchronous systems, it is impossible to reach a consensus in a finite amount of time

Fault Tolerance in DSs - Agreement in Faulty Systems: Formal Definition I.

Formal definition of the agreement problem in DSs:
• let's have a set of distributed processes with initial states ∈ {0, 1}
• the goal: all the processes have to agree on the same value
• additional requirement: it must be possible to agree on both the 0 and the 1 state
• basic assumptions:
  • the system is asynchronous:
    • no bounds on the processes' execution delays exist
    • no bounds on the messages' delivery delays exist
    • there are no synchronized clocks
  • no communication failures - every process can communicate with its neighbors
  • processes fail by crashing - we do not consider Byzantine failures

Fault Tolerance in DSs - Agreement in Faulty Systems: Formal Definition II.

Formal definition of the agreement problem in DSs, cont'd:
• implication: there is no deterministic algorithm which solves the consensus problem in an asynchronous system with processes that may fail
• because it is impossible to distinguish between the cases:
  • a process does not react, because it has failed
  • a process does not react, because it is slow
• practically overcome by establishing timeouts and by ignoring/killing processes that are too slow
  • timeouts are used in so-called Failure Detectors (see later)

Fault Tolerance in DSs - Agreement in Faulty Systems: Fault-tolerant Broadcast

Fault-tolerant Broadcast:
• if there were a proper type of fault-tolerant broadcast, the agreement problem would be solvable
• various types of broadcasts:
  • reliable broadcast
  • FIFO broadcast
  • causal broadcast
  • atomic broadcast - the broadcast which would solve the agreement problem in asynchronous systems

Fault Tolerance in DSs - Agreement in Faulty Systems: Fault-tolerant Broadcast - Reliable Broadcast

Reliable Broadcast:
• basic features:
  • Validity - if a correct process broadcasts m, then it eventually delivers m
  • Agreement - if a correct process delivers m, then all correct processes eventually deliver m
  • (Uniform) Integrity - m is delivered by a process at most once, and only if it was previously broadcast
• possible to implement using send/receive primitives (see the sketch below):
  • the process p sending the broadcast message marks the message with its identifier and a sequence number
    • and sends it to all its neighbors
  • once a message is received:
    • if the message has not been previously received (based on the sender's ID and the sequence number), it is delivered
    • if the particular process is not the message's sender, it forwards the message to all its neighbors
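A minimal sketch of the receive side of this algorithm; the message type, the deliver()/send_to_neighbors() helpers, and the bounded table sizes are assumptions made for illustration.

#include <stdbool.h>

#define MAX_PROCS 16
#define MAX_SEQ   1024

/* hypothetical message type and helpers */
struct msg { int sender; int seq; };
void deliver(const struct msg *m);               /* hand to the application */
void send_to_neighbors(const struct msg *m);     /* send primitive */

static bool seen[MAX_PROCS][MAX_SEQ];   /* (sender, seq) pairs received */
static int  self_id = 0;

/* called whenever a broadcast message arrives from a neighbor */
void on_receive(const struct msg *m) {
    if (seen[m->sender][m->seq])
        return;                          /* duplicate: deliver at most once */
    seen[m->sender][m->seq] = true;
    deliver(m);                          /* integrity + validity */
    if (m->sender != self_id)
        send_to_neighbors(m);            /* relay so every correct process
                                            eventually delivers m (agreement) */
}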
Fault-tolerant Broadcast - Reliable Broadcast
Reliable Broadcast:
• basic properties:
  • Validity - if a correct process broadcasts m, then it eventually delivers m
  • Agreement - if a correct process delivers m, then all correct processes eventually deliver m
  • (Uniform) Integrity - m is delivered by a process at most once, and only if it was previously broadcast
• possible to implement using send/receive primitives (see the sketch after the atomic broadcast below):
  • the broadcasting process p marks the message with its identifier and a sequence number
    • and sends it to all its neighbors
  • once a message is received:
    • if the message has not been received before (based on the sender's ID and sequence number), it is delivered
    • if the receiving process is not the message's sender, it forwards the message to all its neighbors

Fault-tolerant Broadcast - FIFO Broadcast
FIFO Broadcast:
• the reliable broadcast cannot assure message ordering
  • it is possible to receive a subsequent message (from the sender's point of view) before the previous one
• FIFO broadcast: the messages from a single sender have to be delivered in the same order as they were sent
• FIFO broadcast = Reliable broadcast + FIFO ordering
  • if a process p broadcasts a message m before it broadcasts a message m', then no correct process delivers m' unless it has previously delivered m
  • broadcast_p(m) → broadcast_p(m') ⇒ deliver_q(m) → deliver_q(m')
• a simple extension of the reliable broadcast

Fault-tolerant Broadcast - Causal Broadcast
Causal Broadcast:
• the FIFO broadcast is still not sufficient: it is possible to receive a message from a third party that is a reaction to a particular message before receiving that particular message
• ⇒ causal broadcast
• Causal broadcast = Reliable broadcast + causal ordering
  • if the broadcast of a message m happens before the broadcast of a message m', then no correct process delivers m' unless it has previously delivered m
  • broadcast_p(m) → broadcast_q(m') ⇒ deliver_r(m) → deliver_r(m')
• can be implemented as an extension of the FIFO broadcast

Fault-tolerant Broadcast - Atomic Broadcast
Atomic Broadcast:
• even the causal broadcast is still not sufficient: sometimes it is necessary to guarantee the same delivery order at all the replicas
  • two bank branches: one receives the message adding the interest before the message depositing an amount of money to the account, while the other receives these messages in the opposite order ⇒ inconsistency
• ⇒ atomic broadcast
• Atomic broadcast = Reliable broadcast + total ordering
  • if correct processes p and q both deliver messages m and m', then p delivers m before m' if and only if q delivers m before m'
  • deliver_p(m) → deliver_p(m') ⇔ deliver_q(m) → deliver_q(m')
• does not exist in asynchronous systems
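The flooding implementation of reliable broadcast described above can be sketched in a few lines. This is a single-process toy model under simplifying assumptions (no crashes during forwarding, synchronous method calls instead of real message passing); the class and attribute names are my own.

```python
class ReliableBroadcastNode:
    """Flooding-based reliable broadcast (sketch).  Messages are keyed by
    (origin, sequence number); duplicates are dropped, everything else is
    delivered exactly once and re-forwarded to all neighbors."""

    def __init__(self, node_id, neighbors):
        self.node_id = node_id
        self.neighbors = neighbors   # other ReliableBroadcastNode objects
        self.seq = 0                 # sequence number of own broadcasts
        self.seen = set()            # (origin, seq) pairs already handled
        self.delivered = []          # messages handed to the application

    def broadcast(self, payload):
        self.seq += 1
        self.receive((self.node_id, self.seq, payload))

    def receive(self, message):
        origin, seq, payload = message
        if (origin, seq) in self.seen:
            return                          # duplicate: drop it
        self.seen.add((origin, seq))
        self.delivered.append(payload)      # deliver exactly once
        for peer in self.neighbors:         # flood to all neighbors
            peer.receive(message)

# hypothetical fully connected group of three nodes
a = ReliableBroadcastNode("a", [])
b = ReliableBroadcastNode("b", [])
c = ReliableBroadcastNode("c", [])
a.neighbors, b.neighbors, c.neighbors = [b, c], [a, c], [a, b]
a.broadcast("hello")
assert a.delivered == b.delivered == c.delivered == ["hello"]
```

Marking a message as seen before re-forwarding it is what stops the flood from circulating forever.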
Fault-tolerant Broadcast - Timed Reliable Broadcast
Timed Reliable Broadcast:
• a way towards a practical solution
• introduces an upper time limit before which every message has to be delivered
• Timed Reliable broadcast = Reliable broadcast + timeliness
  • there is a known constant Δ such that if a message m is broadcast at real time t, then no correct (any) process delivers m after real time t + Δ
• feasible in asynchronous systems
• a kind of "approximation" of the atomic broadcast

Agreement in Faulty Systems - Failure Detectors I.
• the impossibility of consensus is caused by the inability to distinguish a slow process from a failed one
• synchronous systems: let's use timeouts to determine whether a process has crashed
• ⇒ Failure Detectors
Failure Detectors (FDs):
• a distributed oracle that provides hints about the operational status of processes (i.e., which processes have failed)
• FDs communicate via atomic/timed reliable broadcast
• every process maintains its own FD
  • and asks just its own FD to determine whether a process has failed
• however:
  • the hints may be incorrect
  • an FD may give different hints to different processes
  • an FD may change its mind (over and over) about the operational status of a process
• (a heartbeat-based sketch follows these slides)

Agreement in Faulty Systems - Failure Detectors II.
[Figure: a failure detector wrongly suspecting a process that is merely SLOW.]

Agreement in Faulty Systems - (Perfect) Failure Detector
Perfect Failure Detector:
• properties:
  • Strong Completeness - eventually every process that has crashed is permanently suspected by all non-crashed processes
  • Strong Accuracy - no correct process is ever suspected
• hard to implement
• is perfect failure detection necessary for consensus? No.
• ⇒ a weaker Failure Detector suffices
Weaker Failure Detector:
• properties:
  • Strong Completeness - there is a time after which every faulty process is suspected by every correct process
  • Eventual Strong Accuracy - there is a time after which no correct process is suspected
• can be used to solve consensus
  • this is the weakest FD that can be used to solve consensus
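To make the failure-detector idea concrete, here is a minimal heartbeat-style sketch; the class and method names are hypothetical, not from the lecture. Each monitored process periodically reports in, and anyone silent for longer than the timeout is suspected. A suspicion is only a hint: a late heartbeat rehabilitates the process, so the detector may change its mind.

```python
import time

class HeartbeatFailureDetector:
    """Timeout-based failure detector (sketch).  Suspicions are hints:
    a slow process may be wrongly suspected and is rehabilitated as
    soon as a late heartbeat arrives."""

    def __init__(self, timeout=2.0):
        self.timeout = timeout
        self.last_heard = {}   # process id -> time of last heartbeat
        self.suspects = set()

    def heartbeat(self, pid):
        """Called whenever a heartbeat message from `pid` arrives."""
        self.last_heard[pid] = time.monotonic()
        self.suspects.discard(pid)   # the detector changes its mind

    def suspected(self):
        """Return the current set of suspects (a hint, not a verdict)."""
        now = time.monotonic()
        for pid, seen in self.last_heard.items():
            if now - seen > self.timeout:
                self.suspects.add(pid)
        return set(self.suspects)
```

Lengthening the timeout makes wrong suspicions rarer but crash detection slower; this trade-off is why only the eventual guarantees of the weaker detector above are realistically achievable.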
Conclusion

Distributed Systems - Further Information
• FI courses:
  • PA150: Advanced Operating Systems Concepts (doc. Staudek)
  • PA053: Distributed Systems and Middleware (doc. Tůma)
  • IA039: Supercomputer Architecture and Intensive Computations (prof. Matýska)
  • PA177: High Performance Computing (LSU, prof. Sterling)
  • IV100: Parallel and Distributed Computations (doc. Královič)
  • IB109: Design and Implementation of Parallel Systems (dr. Barnat)
  • etc.
• (Used) Literature:
  • W. Jia and W. Zhou. Distributed Network Systems: From Concepts to Implementations. Springer, 2005.
  • A. S. Tanenbaum and M. van Steen. Distributed Systems: Principles and Paradigms. Pearson Prentice Hall, 2007.
  • G. Coulouris, J. Dollimore, and T. Kindberg. Distributed Systems: Concepts and Design. Addison-Wesley, 2001.
  • Z. Tari and O. Bukhres. Fundamentals of Distributed Object Systems: The CORBA Perspective. John Wiley & Sons, 2001.
  • etc.