IA158 Real Time Systems
Tomáš Brázdil

Organization of This Course

Sources:
- Lectures (slides, notes) based on several sources (hard to obtain)
- Slides are prepared for the lectures; lots of material appears only on the greenboard (⇒ attend the lectures)

Homework:
- a larger homework project

Evaluation:
- the homework project (must be completed to be admitted to the exam)
- an oral exam

Real-Time Systems

Definition 1 (Time)
Merriam-Webster: Time is the measured or measurable period during which an action, process, or condition exists or continues.

Definition 2 (Real-time)
Real-time is a quantitative notion of time measured using a physical clock.
Example: after an event occurs (e.g. temperature exceeds 500 degrees), the corresponding action (cooling) must take place within 100 ms.
Compare with the qualitative notion of time (before, after, eventually, etc.).

Definition 3 (Real-time system)
A real-time system must deliver services in a timely manner.
Not necessarily fast: it must satisfy some quantitative timing constraints.

Real-Time Embedded Systems

Definition 4 (Embedded system)
An embedded system is a computer system designed for specific control functions within a larger system, usually consisting of electronic as well as mechanical parts.

- Most (but not all) real-time systems are embedded.
- Most (but not all) embedded systems are real-time.

(A Few) Examples of Real-Time Embedded Systems

- Industrial: chemical plant control, automated assembly lines (e.g. robotic assembly, inspection)
- Medical: pacemakers, medical monitoring devices
- Transportation systems: computers in cars (ABS, MPFI, cruise control, airbag, ...), aircraft (FMS, fly-by-wire, ...)
- Military applications: controllers in weapons, missiles, ..., radar and sonar tracking
- Multimedia: multimedia centers, videoconferencing, ...

(Non-)Real-Time (Non-)Embedded Systems

There are real-time systems that are not embedded:
- trading systems
- ticket reservation
- multimedia (on a PC)
- ...

There are embedded systems that are (possibly) not real-time:
- e.g. a weather station that sends data once a day without any deadline is not really a real-time system

Caveat: aren't all systems real-time in a sense?

Characteristics of Real-Time Embedded Systems

Real-time systems are often
- safety critical: serious consequences may result if services are not delivered on a timely basis; bugs in embedded real-time systems are often difficult to fix ... we need to validate their correctness
- concurrent: real-world devices operate in parallel, so it is better to model this parallelism by concurrent tasks in the program ... validation may be difficult, formal methods are often needed
- reactive: they interact continuously with their environment (as opposed to information processing systems) ... "traditional" validation methods do not apply

Validating Time Requirements and Predictability

Given real-time requirements and an implementation in HW and SW, how do we show that the requirements are met?

... testing might not suffice. Maiden flight of the space shuttle, 12 April 1981: there was a 1/67 probability that a transient overload occurs during initialization, and it actually did!
We need a formal model and validation ...
... and we need predictable behavior, which is difficult to obtain:
- caches, DMA, unmaskable interrupts
- memory management
- scheduling anomalies
- worst-case execution times are difficult to compute
- ...

Types of Timing Requirements

Time-sharing systems minimize the average response time:
- the goal of scheduling in standard operating systems such as Linux and Windows

Often it is not enough to minimize the average response time!
(A man drowned crossing a stream with an average depth of 15 cm.)

"Hard" real-time tasks must always finish before their deadline!
- e.g. the airbag in a car: whenever a collision is detected, the airbag must be deployed within 10 ms

Not all tasks in a real-time system are critical; sometimes only the quality of service is affected by missing a deadline:
- most "soft" real-time tasks should finish before their deadlines
- e.g. the frame rate in a videoconference should be kept above 15 fps most of the time

Many real-time systems combine "hard" and "soft" real-time tasks, i.e. we optimize performance w.r.t. the "soft" real-time tasks under the constraint that the "hard" real-time tasks finish before their deadlines.
Examples of Real-Time Systems

- Digital process control: anti-lock braking system
- Higher-level command and control: helicopter flight control
- Real-time databases: stock trading systems

Digital Process Control

[figure: the computer controls the flow in the pipe in real time]

The controller (computer) controls the plant using the actuator (valve) based on sampled data from the sensor (flow meter):
- $y(t)$ – the measured state of the plant
- $r(t)$ – the desired state of the plant
- the control output $u(t)$ is calculated as a function of $y(t)$ and $r(t)$, e.g.
  $u_k = u_{k-2} + \alpha(r_k - y_k) + \beta(r_{k-1} - y_{k-1}) + \gamma(r_{k-2} - y_{k-2})$
  where $\alpha$, $\beta$, $\gamma$ are suitable constants

Pseudo-code for the controller:

    set timer to interrupt periodically with period T
    foreach timer interrupt do
        analogue-to-digital conversion of y(t) to get y_k
        compute control output u_k based on r_k and y_k
        digital-to-analogue conversion of u_k to get u(t)
    end

Effective control of the plant depends on:
- the correct reference input and control-law computation
- the accuracy of the sensor measurements
- the resolution of the sampled data (i.e. bits per sample)
- the frequency of interrupts (i.e. 1/T)

T is the sampling period:
- a small T better approximates the analogue behavior
- a large T means less processor-time demand ... but may result in unstable control
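For illustration, a minimal C sketch of this control loop; the I/O routines (ad_convert, da_convert, wait_for_timer_interrupt, reference) are hypothetical placeholders for board-support code, and the constants are arbitrary:

    /* Hypothetical board-support routines (assumptions, not from the lecture). */
    double ad_convert(void);               /* sample y(t): analogue-to-digital */
    void   da_convert(double u);           /* output u(t): digital-to-analogue */
    void   wait_for_timer_interrupt(void); /* blocks until the next period T   */
    double reference(void);                /* current desired state r(t)       */

    /* Suitable constants would be derived from a model of the plant. */
    static const double ALPHA = 0.5, BETA = 0.3, GAMMA = 0.1;

    /* Implements u_k = u_{k-2} + a(r_k - y_k) + b(r_{k-1} - y_{k-1})
       + c(r_{k-2} - y_{k-2}) with two steps of history. */
    void controller_loop(void) {
        double u1 = 0, u2 = 0;   /* u_{k-1}, u_{k-2}                     */
        double e1 = 0, e2 = 0;   /* r_{k-1} - y_{k-1}, r_{k-2} - y_{k-2} */
        for (;;) {
            wait_for_timer_interrupt();    /* once per period T  */
            double yk = ad_convert();      /* sample the plant   */
            double ek = reference() - yk;  /* error r_k - y_k    */
            double uk = u2 + ALPHA * ek + BETA * e1 + GAMMA * e2;
            da_convert(uk);                /* drive the actuator */
            u2 = u1; u1 = uk;              /* shift the history  */
            e2 = e1; e1 = ek;
        }
    }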
Example

$r(t) = 1$ for $t \geq 0$ [step-response figures for several choices of the sampling period]

Anti-Lock Braking System

- The controller monitors the speed sensors in the wheels.
- Right before a wheel locks up, it experiences a rapid deceleration.
- If a rapid deceleration of a wheel is observed, the controller alternately
  - reduces the pressure on the corresponding brake until acceleration is observed,
  - then applies the brake until deceleration is observed.

Multi-Rate DPC – Helicopter Flight Control

There are also three velocity components.
Two control loops: the pilot's control (30 Hz) and stabilization (90 Hz).

In each 1/180-second cycle, do the following (a cyclic-executive sketch in C follows this list):
1. Validate sensor data; in the presence of failures, reconfigure the system.
2. Do the following 30-Hz avionics tasks, each one every six cycles:
   - keyboard input and mode selection
   - data normalization and coordinate transformation
   - tracking reference update
3. Do the following 30-Hz computations, each one every six cycles:
   - control laws of the outer pitch-control loop
   - control laws of the outer roll-control loop
   - control laws of the outer yaw- and collective-control loop
4. Do each of the following 90-Hz computations once every two cycles, using outputs produced by the 30-Hz computations and avionics tasks:
   - control laws of the inner pitch-control loop
   - control laws of the inner roll- and collective-control loop
5. Compute the control laws of the inner yaw-control loop, using outputs produced by the 90-Hz control-law computations as inputs.
6. Output commands.
7. Carry out built-in test.
8. Wait until the beginning of the next cycle.
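A minimal sketch of this 180 Hz cyclic executive; the task functions and the wait_for_cycle primitive are assumed stubs, and spreading each rate group over phases is one common design choice, not something the slide prescribes:

    void validate_sensors(void);
    void avionics_30hz(unsigned phase);    /* keyboard, normalization, tracking   */
    void outer_loops_30hz(unsigned phase); /* outer pitch / roll / yaw+collective */
    void inner_loops_90hz(unsigned phase); /* inner pitch / roll+collective       */
    void inner_yaw_180hz(void);
    void output_commands(void);
    void built_in_test(void);
    void wait_for_cycle(void);             /* blocks until the next 1/180 s tick  */

    /* Base rate 180 Hz; each 30-Hz task runs once per 6 cycles (the phase
       picks which subtask runs this cycle), each 90-Hz task once per 2. */
    void flight_control(void) {
        unsigned cycle = 0;
        for (;;) {
            validate_sensors();
            avionics_30hz(cycle % 6);
            outer_loops_30hz(cycle % 6);
            inner_loops_90hz(cycle % 2);
            inner_yaw_180hz();
            output_commands();
            built_in_test();
            wait_for_cycle();
            cycle++;
        }
    }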
Higher-Level Command and Control

- Controllers are organized into a hierarchy.
- At the lowest level are the digital control systems that operate on the physical environment.
- Higher-level controllers monitor the behavior of the lower levels.
- The time scale and the complexity of decision making increase as one goes up the hierarchy (from control to planning).

Real-Time Database Systems

- Databases that contain perishable data, i.e. the relevance of the data deteriorates with time.
- Air traffic control, stock price quotation systems, tracking systems, etc.
- The temporal quality of the data is quantified by the age of an image object, i.e. the length of time since its last update.
- Temporal consistency (a sketch of both checks follows the stock-trading example below):
  - absolute = the maximum age is bounded by a fixed threshold
  - relative = the maximum difference in ages is bounded by a threshold
  - e.g. a planning system correlating traffic density and the flow of vehicles
- Users of the database compete for access; various models for trading consistency for time demands exist.

Stock-Trading System

- A system for selling/buying stock at public prices.
- Prices are volatile in their movement.
- Stop orders:
  - set an upper limit on the price for buying: buy for the best available price once the limit is reached; e.g. a stock currently trading at $30 should be bought when the price rises above $35
  - set a lower limit on the price for selling: sell for the best available price once the limit is reached; e.g. a stock currently trading at $30 should be sold when the price sinks below $25
- Depending on the delay, the available price may differ from the limit: successful stop orders depend on the timely delivery of stock trade data and on the ability to trade on the changing prices in a timely manner.
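Returning to temporal consistency, a minimal sketch of the two checks; the data layout and thresholds are chosen here purely for illustration:

    #include <stdbool.h>

    typedef struct {
        double value;
        double last_update;   /* time of the last update */
    } image_object;

    static double age(const image_object *o, double now) {
        return now - o->last_update;
    }

    /* Absolute consistency: every age is bounded by a fixed threshold. */
    bool absolutely_consistent(const image_object *objs, int n,
                               double now, double max_age) {
        for (int i = 0; i < n; i++)
            if (age(&objs[i], now) > max_age) return false;
        return true;
    }

    /* Relative consistency: the maximum difference in ages is bounded
       by a threshold. Assumes n >= 1. */
    bool relatively_consistent(const image_object *objs, int n,
                               double now, double max_diff) {
        double lo = age(&objs[0], now), hi = lo;
        for (int i = 1; i < n; i++) {
            double a = age(&objs[i], now);
            if (a < lo) lo = a;
            if (a > hi) hi = a;
        }
        return hi - lo <= max_diff;
    }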
Structure of Real-Time (Embedded) Applications

[figure]

Types of Real-Time Systems

- Purely cyclic: every task executes periodically; I/O operations are polled; demands on resources do not vary; e.g. digital controllers
- Mostly cyclic: most tasks execute periodically; the system also responds asynchronously to external events (fault recovery and external commands); e.g. avionics
- Asynchronous and somewhat predictable: the durations between consecutive executions of a task, as well as the demands on resources, may vary considerably; these variations have either a bounded range or known statistics; e.g. radar signal processing, tracking

The type of application affects how we schedule tasks and prove correctness:
- it is easier to reason about applications that are more cyclic, synchronous and predictable
- many real-time systems are designed in this manner
- a safe, conservative design approach, if it works

Real-Time System Failures

- AT&T long distance calls
- Therac-25 medical accelerator disaster
- Patriot missile mistiming

AT&T Long Distance Calls

- 114 computer-operated electronic switches scattered across the USA
- handling up to 700,000 calls an hour

The problem:
- the switch in New York City neared its load limit
- it entered a four-second maintenance reset
- it sent "do not disturb" to its neighbors
- after the reset, the switch began to distribute calls (quickly)
- another switch then received one of these calls from New York
- it began to update its records that New York was back on line
- a second call from New York arrived less than 10 milliseconds after the first, i.e. while the first had not yet been handled; this, together with a SW bug, caused a maintenance reset
- the error propagated further ...

The reason for the failure: the system was unable to react to closely timed messages.

Therac-25 Medical Accelerator Disaster

- Therac-25 = a machine for radiotherapy
- between 1985 and 1987, (at least) six accidents involving enormous radiation overdoses to patients
- half of these patients died due to the overdoses

Therac-25 – the modes

1. electron mode
   - electron beam (low current)
   - various levels of energy (5 to 25 MeV)
   - scanning magnets used to spread the beam to a safe concentration
2. photon mode
   - only one level of energy (25 MeV), with a much larger electron-beam current
   - the electron beam strikes a metal foil to produce X-rays (photons)
   - the X-ray beam is "flattened" by a device below the foil
3. light mode: just a light beam, used to illuminate the field on the surface of the patient's body that will be treated

All devices are placed on a turntable, which is supposed to be rotated to the correct position before the beam is started up.

Therac-25 – turntable

[figure]

The Software

The software is responsible for:
- the operator:
  - monitoring input and editing changes from the operator
  - updating the screen to show the current status of the machine
  - printing in response to operator commands
- the machine:
  - monitoring the machine status
  - placement of the turntable
  - strength and shape of the beam; operation of the bending and scanning magnets
  - setting the machine up for the specified treatment
  - turning the beam on
  - turning the beam off (after treatment, on operator command, or if a malfunction is detected)

The software ran several safety-critical tasks in parallel, with insufficient hardware protection (as opposed to the previous models)!

Therac-25 – software

- The Therac-25 runs on a real-time operating system.
- Four major components of the software: stored data, a scheduler, a set of tasks, and interrupt services (e.g. the computer clock and the handling of computer-hardware-generated errors).
- The software segregated the tasks into
  - critical tasks: e.g. the setup and operation of the beam
  - non-critical tasks: e.g. monitoring the keyboard
- The scheduler directs all non-interrupt events and orders simultaneous events.
- Every 0.1 seconds, tasks are initiated; critical tasks are executed first, with non-critical tasks taking up any remaining time.
- Communication between tasks is based on shared variables (without proper atomic test-and-set instructions).

What happened?

There were several accidents due to various bugs in the software. One of them proceeded as follows (much simplified):
- the operator entered the parameters for an X-ray treatment
- the machine started to set up for the treatment
- the operator changed the mode from X-rays to electron (within the interval from 1 s to 8 s after the end of the original editing)
- the patient received an X-ray "treatment" with the turntable in the electron position (i.e. unshielded)

The cause: the turntable and the treatment parameters were set by two different concurrent procedures, Hand and Datent, respectively. If the change in parameters came at the "right" time, only Hand reacted to the change.
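To illustrate the class of bug (this is not the actual Therac-25 code, only a minimal sketch of the hazard): two tasks polling a shared "edit changed" flag without an atomic test-and-set can each consume the other's notification:

    #include <stdbool.h>

    static volatile bool edit_changed = false;  /* set by the keyboard task */

    void task_hand(void) {            /* positions the turntable */
        if (edit_changed) {
            /* ... re-read the mode and move the turntable ... */
            edit_changed = false;     /* non-atomic read-then-clear */
        }
    }

    void task_datent(void) {          /* sets the treatment parameters */
        if (edit_changed) {
            /* ... re-read the parameters and reprogram the beam ... */
            edit_changed = false;     /* may race with task_hand */
        }
    }

Whichever task happens to poll first clears the flag, so the other never sees the operator's change and the turntable and beam parameters can diverge. A per-consumer flag or a proper atomic test-and-set removes the race.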
Patriot Missile Mistiming

[figure: Patriot vs Scud]

- Patriot: an air defense missile system.
- It failed to intercept a Scud missile on February 25, 1991 at Dhahran, Saudi Arabia (the missile hit US army barracks; 28 people were killed).
- The problem was caused by an incorrect measurement of time.

Simplified principle of operation:
- Patriot's radar detects an airborne object
- the object is identified as a Scud missile (according to its speed, size, etc.)
- the range gate computes an area in the airspace where the system should next look for it
- finding the object in the calculated area confirms that it is a Scud
- then the Scud is intercepted

[figure]

Prediction of the new area:
- a function of the velocity and the time of the last radar detection
- the velocity is represented as a real number
- the current time is kept by incrementing a whole-number counter that counts tenths of seconds
- computations are done in 24-bit fixed-point arithmetic
- the time is obtained by converting the counter to a 24-bit real number and multiplying it by 1/10 represented in 24 bits (the real value of this representation of 1/10 is approximately 0.099999905)
- the system had already been running for 100 hours, so the counter value was 3,600,000 tenths of seconds, and $3{,}600{,}000 \cdot 0.099999905 \approx 359{,}999.66$ seconds instead of 360,000 seconds
- the accumulated error of about 0.34 seconds means roughly 687 m of error for a MACH 5 Scud missile
- the problem was not only in the wrong conversion, but also in the fact that at some points the correct conversion was used (after an incomplete bug fix), so the errors did not cancel out

As a result, the tracking gate looked into the wrong area.
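The accumulated drift can be reproduced with a few lines of C, assuming (as reported) that 1/10 was truncated to 24 fractional binary digits:

    #include <stdio.h>

    int main(void) {
        /* 1/10 truncated to a 24-bit binary fraction: floor(0.1 * 2^24) / 2^24 */
        const double two24 = 16777216.0;  /* 2^24 */
        const double tenth_24bit = (double)(long)(0.1 * two24) / two24;
        const long   ticks = 3600000L;    /* 100 hours of 0.1 s ticks */
        double computed = ticks * tenth_24bit;       /* time as computed on board */
        printf("1/10 as stored:    %.9f\n", tenth_24bit);          /* 0.099999905 */
        printf("drift after 100 h: %.4f s\n", 360000.0 - computed); /* ~0.3433 s  */
        return 0;
    }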
Starliner

- Developed by Boeing & NASA.
- Carries seven passengers, or a mix of crew and cargo, on missions to low-Earth orbit.
- A timing issue occurred on the Orbital Flight Test on December 20, 2019.

What was supposed to happen: Atlas V leaves Starliner on a suborbital trajectory; Starliner's own propulsion system then takes the spacecraft into orbit and to the ISS.

What happened: the Mission Elapsed Timer (MET), or clock, on Starliner was set to the wrong time and did not trigger the engines to fire correctly. Other onboard systems compensated and the spacecraft reached orbit, but it had depleted so much fuel that there was not enough left to continue the journey.

(Rough) Course Outline

- Real-time scheduling:
  - time and priority driven
  - resource control
  - multi-processor (a bit)
- A little bit on programming real-time systems:
  - real-time operating systems

Outline – Scheduling

The scheduling problem:
- Input: the available processors and resources; a set of tasks/jobs with their requirements, deadlines, etc.
- Question: how to assign processors and resources to the tasks/jobs so that all requirements are met?
Example: 1 processor, one critical section shared by job 1 and job 3
- job 1: release time 1, computation time 4, deadline 8
- job 2: release time 1, computation time 2, deadline 5
- job 3: release time 0, computation time 3, deadline 4
- ...

We consider a formal model of systems
- with parallel jobs that possibly contend for shared resources,
- with periodic as well as aperiodic jobs.

We consider various algorithms that schedule jobs to meet their timing constraints:
- offline and online algorithms, RM, EDF, etc.

Outline – Programming

Basic information about RTOSs and RT programming languages:
- RTOS – an overview
- real-time in non-real-time operating systems
- implementation of the theoretical concepts in freeRTOS
- RT in programming languages – a short overview

Real-Time Scheduling – Formal Model

[Some parts of this lecture are based on the real-time systems course of Colin Perkins, http://csperkins.org/teaching/rtes/index.html]

We introduce an abstract model of real-time systems that
- abstracts away unessential details,
- sets up consistent terminology.

Three components of the model:
- a workload model that describes the applications supported by the system, i.e. jobs, tasks, ...
- a resource model that describes the system resources available to the applications, i.e. processors, passive resources, ...
- algorithms that define how the application uses the resources at all times, i.e. scheduling and resource access protocols

Basic Notions

- A job is a unit of work that is scheduled and executed by a system: compute a control law, transform sensor data, etc.
- A task is a set of related jobs which jointly provide some system function: check the temperature periodically, keep a steady flow of water.
- A job executes on a processor: a CPU, a transmission link in a network, a database server, etc.
- A job may use some (shared) passive resources: a file, a database lock, a shared variable, etc.

Life Cycle of a Job

A job moves between the states READY, RUN, WAITING and COMPLETED:
- release → READY
- READY → RUN (scheduling)
- RUN → READY (preemption)
- RUN → WAITING (waiting for a busy resource)
- WAITING → READY (a free resource is signalled)
- RUN → COMPLETED (the job completes)
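This life cycle can be written down as a small C state machine; the enum and transition function below are illustrative, not part of the lecture's model:

    #include <assert.h>

    typedef enum { READY, RUN, WAITING, COMPLETED } job_state;
    typedef enum { SCHEDULE, PREEMPT, WAIT_RESOURCE,
                   SIGNAL_RESOURCE, COMPLETE } job_event;

    /* Returns the next state; asserts on transitions the diagram forbids. */
    job_state next_state(job_state s, job_event e) {
        switch (s) {
        case READY:
            assert(e == SCHEDULE);        return RUN;
        case RUN:
            if (e == PREEMPT)             return READY;
            if (e == WAIT_RESOURCE)       return WAITING;
            assert(e == COMPLETE);        return COMPLETED;
        case WAITING:
            assert(e == SIGNAL_RESOURCE); return READY;
        default:                          /* COMPLETED is terminal */
            assert(0);                    return s;
        }
    }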
Jobs – Parameters

We consider a finite, or countably infinite, number of jobs $J_1, J_2, \ldots$ Each job has several parameters, of four types:
- temporal: release time, execution time, deadlines
- functional: laxity type (hard and soft real-time), preemptability, (criticality)
- interconnection: precedence constraints
- resource usage: of processors and passive resources

Job Parameters – Execution Time

- The execution time $e_i$ of a job $J_i$ is the amount of time required to complete the execution of $J_i$ when it executes alone and has all the necessary resources.
- The value of $e_i$ depends on the complexity of the job and the speed of the processor on which it executes; it may change for various reasons: conditional branches, caches, pipelines, etc.
- Execution times fall into an interval $[e_i^-, e_i^+]$; we assume that we know this interval (WCET analysis) but not necessarily $e_i$.
- We usually validate the system using only $e_i^+$ for each job, i.e. we assume $e_i = e_i^+$.

Job Parameters – Release and Response Time

- The release time $r_i$ is the instant in time when job $J_i$ becomes available for execution.
- The release time may jitter; only an interval $[r_i^-, r_i^+]$ is known.
- A job can be executed at any time at, or after, its release time, provided its processor and resource demands are met.
- The completion time $C_i$ is the instant in time when the job completes its execution.
- The response time is the difference $C_i - r_i$ between the completion time and the release time.

[figure: timeline with the release time $r_i$ inside $[r_i^-, r_i^+]$, the completion time $C_i$, and the response time]

Job Parameters – Deadlines

- The absolute deadline $d_i$ is the instant in time by which job $J_i$ must be completed.
- The relative deadline $D_i$ is the maximum allowable response time, i.e. $D_i = d_i - r_i$.
- The feasible interval is the interval $(r_i, d_i]$.
- A timing constraint of a job is specified using its release time together with its relative and absolute deadlines.
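Collecting the temporal parameters so far, a minimal C record for a job; the field names and units are illustrative:

    /* Temporal parameters of a job J_i (times in ms, say). */
    typedef struct {
        double release;       /* r_i                        */
        double wcet;          /* e_i^+, used for validation */
        double abs_deadline;  /* d_i                        */
    } job;

    static double rel_deadline(const job *j) {  /* D_i = d_i - r_i */
        return j->abs_deadline - j->release;
    }

    static double response_time(const job *j, double completion) {
        return completion - j->release;         /* C_i - r_i */
    }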
Laxity Type – Hard Real-Time

A hard real-time constraint specifies that a job should never miss its deadline.
Examples: flight control, railway signaling, anti-lock brakes, etc.

Several more precise definitions occur in the literature:
- A timing constraint is hard if the failure to meet it is considered a fatal error, e.g. a bomb is dropped too late and hits civilians.
- A timing constraint is hard if the usefulness of the results falls off abruptly (and may even become negative) at the deadline. Here the nature of the abruptness allows one to soften the constraint.

Definition 5
A timing constraint is hard if the user requires formal validation that the job meets its timing constraint.

Laxity Type – Soft Real-Time

A soft real-time constraint specifies that a job may occasionally miss its deadline.
Examples: stock trading, multimedia, etc.

Several more precise definitions occur in the literature:
- A timing constraint is soft if the failure to meet it is undesirable but acceptable if the probability is low.
- A timing constraint is soft if the usefulness of the results decreases at a slower rate with the tardiness of the job, e.g. the probability that the response time exceeds 50 ms is less than 0.2.

Definition 6
A timing constraint is soft if either validation is not required, or only a demonstration that a statistical constraint is met suffices.

Jobs – Preemptability

Jobs may be interrupted by higher-priority jobs:
- A job is preemptable if its execution can be interrupted.
- A job is non-preemptable if it must run to completion once started. (Some preemptable jobs have periods during which they cannot be preempted.)
- The context-switch time is the time needed to switch between jobs. (Most of the time we assume that this time is negligible.)

Reasons for preemptability:
- jobs may have different levels of criticality, e.g. brakes vs radio tuning
- priorities may be part of the scheduling algorithm, e.g. resource access control algorithms

Jobs – Precedence Constraints

Jobs may be constrained to execute in a particular order; this is known as a precedence constraint.
- A job $J_i$ is a predecessor of another job $J_k$, and $J_k$ a successor of $J_i$ (denoted by $J_i < J_k$), if $J_k$ cannot begin execution until the execution of $J_i$ completes.
- $J_i$ is an immediate predecessor of $J_k$ if $J_i < J_k$ and there is no other job $J_j$ such that $J_i < J_j < J_k$.
- $J_i$ and $J_k$ are independent when neither $J_i < J_k$ nor $J_k < J_i$.
- A job with a precedence constraint becomes ready for execution when its release time has passed and all of its predecessors have completed (a readiness check is sketched after the next slide).

Example: authentication before retrieving information; a signal-processing job in a radar surveillance system precedes a tracker job.

Tasks – Modeling Reactive Systems

Reactive systems run for an unlimited amount of time.
A system parameter, the number of tasks,
- may be known in advance (flight control),
- may change during the computation (air traffic control).

We consider three types of tasks:
- periodic – jobs executed at regular intervals, hard deadlines
- aperiodic – jobs executed at random intervals, soft deadlines
- sporadic – jobs executed at random intervals, hard deadlines
... precise definitions later.
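Returning to precedence constraints, a minimal sketch of the readiness test (release time passed and every predecessor completed); the adjacency-matrix encoding and the size bound are choices made here for brevity:

    #include <stdbool.h>

    #define MAX_JOBS 16

    /* pred[i][k] means J_i < J_k: J_i must complete before J_k starts. */
    typedef struct {
        int    n;                          /* number of jobs, <= MAX_JOBS */
        bool   pred[MAX_JOBS][MAX_JOBS];
        double release[MAX_JOBS];
        bool   completed[MAX_JOBS];
    } job_set;

    /* J_k is ready at time t iff t >= r_k and all predecessors completed. */
    bool ready(const job_set *js, int k, double t) {
        if (t < js->release[k]) return false;
        for (int i = 0; i < js->n; i++)
            if (js->pred[i][k] && !js->completed[i]) return false;
        return true;
    }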
Processors

A processor, $P$, is an active component on which jobs are scheduled. The general case considered in the literature: $m$ processors $P_1, \ldots, P_m$, each $P_i$ with its own type and speed.

We mostly concentrate on single-processor scheduling:
- efficient scheduling algorithms
- in a sense it subsumes multiprocessor scheduling in which tasks are assigned statically to individual processors, i.e. all jobs of every task are assigned to a single processor

Multi-processor scheduling is a rich area of current research; we touch on it only lightly (later).

Resources

A resource, $R$, is a passive entity upon which jobs may depend. In general, we consider $n$ resources $R_1, \ldots, R_n$ of distinct types.

Each $R_i$ is used in a mutually exclusive manner (a locking sketch follows below):
- a job that acquires a free resource locks the resource
- jobs that need a busy resource have to wait until the resource is released
- once released, the resource may be used by another job (i.e. it is not consumed)
- (more generally, each resource may be used by $k$ jobs concurrently, i.e. there are $k$ units of the resource)

The resource requirements of a job specify
- which resources are used by the job,
- the time interval(s) during which each resource is required
(precise definitions later).

Scheduling

A schedule assigns, at every time instant, processors and resources to jobs. More formally, a schedule is a function
$\sigma : \{J_1, \ldots\} \times \mathbb{R}^+_0 \to \mathcal{P}(\{P_1, \ldots, P_m, R_1, \ldots, R_n\})$
such that for every $t \in \mathbb{R}^+_0$ there are rational $0 \le t_1 \le t < t_2$ such that $\sigma(J_i, \cdot)$ is constant on $[t_1, t_2)$.
(We also assume that there is a least time quantum within which the scheduler does not change its decisions, i.e. each of the intervals $[t_1, t_2)$ is longer than some fixed $\varepsilon > 0$.)
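As promised above, a locking sketch for the mutual-exclusion discipline; POSIX threads are an implementation assumption here, the model itself is independent of any particular API:

    #include <pthread.h>

    /* One lock per resource R_i; a job holds the lock exactly for the
       interval during which it needs the resource. */
    static pthread_mutex_t resource_R = PTHREAD_MUTEX_INITIALIZER;

    void *some_job(void *arg) {
        /* ... computation that does not need the resource ... */
        pthread_mutex_lock(&resource_R);   /* waits if the resource is busy */
        /* ... critical section: use the resource ... */
        pthread_mutex_unlock(&resource_R); /* release: the resource is not
                                              consumed, another job may lock it */
        return arg;
    }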
Valid and Feasible Schedule

A schedule is valid if it satisfies the following conditions:
Every processor is assigned to at most one job at any time.
Every job is assigned to at most one processor at any time.
No job is scheduled before its release time.
The total amount of processor time assigned to a given job is equal to its actual execution time.
All the precedence and resource usage constraints are satisfied.
A schedule is feasible if all jobs with hard real-time constraints complete before their deadlines.
A set of jobs is schedulable if there is a feasible schedule for the set.

Scheduling – Algorithms

A scheduling algorithm computes a schedule for a set of jobs. A set of jobs is schedulable according to a scheduling algorithm if the algorithm produces a feasible schedule.

Definition 7
A scheduling algorithm is optimal if it always produces a feasible schedule whenever such a schedule exists.

A sketch of the validity and feasibility checks in code follows.
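A minimal sketch of the single-processor case, ignoring precedence and resource constraints for brevity; the representation of a schedule as (job, start, duration) slices is ours, not the lecture's:

```python
def is_valid(jobs, schedule):
    # jobs: dict name -> (release, execution, deadline)
    # schedule: list of (name, start, duration) slices on one processor
    slices = sorted(schedule, key=lambda s: s[1])
    # the processor runs at most one job at any time
    for (_, s1, d1), (_, s2, _) in zip(slices, slices[1:]):
        if s2 < s1 + d1:
            return False
    # no job is scheduled before its release time
    if any(start < jobs[name][0] for name, start, _ in slices):
        return False
    # processor time assigned to each job equals its execution time
    total = {}
    for name, _, dur in slices:
        total[name] = total.get(name, 0.0) + dur
    return all(total.get(name, 0.0) == e for name, (r, e, d) in jobs.items())

def is_feasible(jobs, schedule):
    # feasible = valid + every (hard) deadline met
    if not is_valid(jobs, schedule):
        return False
    finish = {name: max(s + d for n, s, d in schedule if n == name)
              for name in jobs}
    return all(finish[name] <= dl for name, (r, e, dl) in jobs.items())

jobs = {"J1": (0, 1, 2), "J2": (0, 2, 5)}
print(is_feasible(jobs, [("J1", 0, 1), ("J2", 1, 2)]))   # True
```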
Real-Time Scheduling: Individual Jobs

Scheduling of Individual Jobs

We start with scheduling of finite sets of jobs {J1, . . . , Jm} for execution on single processor systems. Each Ji has a release time ri, an execution time ei and an absolute deadline di. We assume hard real-time constraints.
The question: Is there an optimal scheduling algorithm?
We proceed in the direction of growing generality:
1. No resources, independent, synchronized (i.e. ri = 0 for all i)
2. No resources, independent but not synchronized
3. No resources but possibly dependent
4. The general case

No resources, Independent, Synchronized

     J1  J2  J3  J4  J5
ei    1   1   1   3   2
di    3  10   7   8   5

Is there a feasible schedule?
Note: Preemption does not help in the synchronized case.

Theorem 8
If there are no resource contentions, then executing independent jobs in the order of non-decreasing deadline (EDD) produces a feasible schedule (if one exists).

Proof.
Let σ be a schedule. An inversion is a pair (Ja, Jb) such that Ja precedes Jb in σ but db < da. Note that σ is EDD iff it does not contain any inversion.
Assume k > 0 inversions in σ. Let (Ja, Jb) be an inversion such that Ja is scheduled right before Jb. There is always at least one such inversion (homework). Let ta < tb be the time instants at which Ja, Jb start executing in σ. Recall that Ca, Cb are the completion times of Ja, Jb, and ea, eb their execution times. Note that Ca ≤ da and that Cb ≤ db < da.
Define a new schedule σ′ in which all jobs except Ja, Jb are scheduled as in σ, Jb starts at ta, and Ja starts at ta + eb. Observe that σ′ is still feasible:
Jb is completed at ta + eb < ta + eb + ea = tb + eb = Cb ≤ db;
Ja is completed at ta + eb + ea = Cb ≤ db < da.
Note that σ′ has k − 1 inversions. Repeating the above procedure k times yields an EDD schedule. □

Is there any simple schedulability test? {J1, . . . , Jn} where d1 ≤ · · · ≤ dn is schedulable iff

∀i ∈ {1, . . . , n} : Σ_{k=1}^{i} ek ≤ di

(a sketch in code follows)
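A small Python sketch of the test above (illustrative only); it sorts the jobs by deadline and checks every prefix:

```python
def edd_schedulable(jobs):
    # jobs: list of (execution e_i, absolute deadline d_i); all jobs are
    # released at time 0 (synchronized), independent, no resources.
    t = 0
    for e, d in sorted(jobs, key=lambda j: j[1]):   # non-decreasing deadline
        t += e                  # completion time of this job under EDD
        if t > d:
            return False
    return True

# The example set J1..J5 from the table above:
print(edd_schedulable([(1, 3), (1, 10), (1, 7), (3, 8), (2, 5)]))   # True
```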
No resources, Independent (No Synchro)

     J1  J2  J3
ri    0   0   2
ei    1   2   2
di    2   5   4

Find a (feasible) schedule (with and without preemption) and determine the response time of each job in your schedule. Preemption makes a difference.

Earliest Deadline First (EDF) scheduling: at any time instant, a job with the earliest absolute deadline is executed.
Here EDF works in the preemptive case but not in the non-preemptive one:

     J1  J2
ri    0   1
ei    4   2
di    7   5

Theorem 9
If there are no resource contentions, jobs are independent and preemption is allowed, the EDF algorithm finds a feasible schedule (if one exists).

Proof.
We show that any feasible schedule σ can be transformed in finitely many steps into a feasible EDF schedule. Let σ be a feasible schedule that is not EDF. Assume, w.l.o.g., that for every k ∈ N at most one job is executed in the interval [k, k + 1) and that all release times and deadlines are in N. (Otherwise rescale by the least common multiple.)
We say that σ violates EDF at k if one of the following conditions holds:
1. No job is executed in [k, k + 1) and there is a job Jb ready for execution in [k, k + 1).
2. There are two jobs Ja and Jb such that Ja and Jb are ready for execution at k, Ja is executed in [k, k + 1), and db < da.
Let k ∈ N be the least time instant such that σ violates EDF at k, and assume w.l.o.g. that Jb has the minimum deadline among all jobs ready for execution at k. Consider the two cases separately:
ad 1. Define a new schedule σ′ which is the same as σ except that Jb is executed in the empty interval [k, k + 1).
ad 2. There is ℓ with k < ℓ < db such that Jb is executed in [ℓ, ℓ + 1). Define a new schedule σ′ which is the same as σ except that it executes Jb in [k, k + 1) and Ja in [ℓ, ℓ + 1). (In particular Ja still completes by its deadline, since ℓ + 1 ≤ db < da.)
In both cases σ′ is feasible and does not violate EDF at any k′ ≤ k. Finitely many such steps transform any feasible schedule into an EDF schedule. □

(A small simulator illustrating preemptive EDF follows below.)
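A minimal preemptive EDF simulator in Python (a sketch, not the lecture's algorithm; it assumes all parameters are multiples of the time slice, matching the integer-time assumption used in the proof):

```python
import heapq

def edf_schedule(jobs, step=1.0):
    # jobs: list of (name, release, execution, deadline).
    # Returns the trace of executed slices, or None on a deadline miss.
    t = 0.0
    pending = sorted(jobs, key=lambda j: j[1])     # by release time
    ready = []                                     # heap ordered by deadline
    remaining = {name: e for name, r, e, d in jobs}
    trace = []
    while pending or ready:
        while pending and pending[0][1] <= t:      # release arriving jobs
            name, r, e, d = pending.pop(0)
            heapq.heappush(ready, (d, name))
        if not ready:
            t = pending[0][1]                      # idle until next release
            continue
        d, name = ready[0]                         # earliest deadline first
        trace.append((name, t))
        remaining[name] -= step
        t += step
        if remaining[name] <= 0:
            heapq.heappop(ready)
            if t > d:
                return None                        # deadline miss
    return trace

# The example above: with preemption, EDF succeeds.
print(edf_schedule([("J1", 0, 4, 7), ("J2", 1, 2, 5)]))
```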
The non-preemptive case is NP-hard. Heuristics are needed, such as the Spring algorithm, which usually work in a much more general setting (with resources etc.):
Use the notion of a partial schedule, where only a subset of the jobs has been scheduled.
Exhaustive search through partial schedules: start with an empty schedule; in every step, either add a job which maximizes a heuristic function H among jobs that have not yet been tried in this partial schedule, or backtrack to the previous partial schedule if there is no such job.
The heuristic function identifies plausible jobs to be scheduled (earliest release, earliest deadline, etc.).

No resources, Dependent (No Synchro)

Example:
     J1  J2  J3  J4  J5  J6
ei    1   1   1   1   1   1
di    2   5   4   3   5   6

Dependencies: (the slide shows a precedence graph over J1, . . . , J6)
Does EDF work?

Theorem 10
Assume that there are no resource contentions and jobs are preemptable. There is a polynomial time algorithm which decides whether a feasible schedule exists and, if yes, computes one.

Idea: Reduce to independent jobs by changing release times and deadlines. Then use EDF.
Observe that if Ji < Jk, then replacing
rk with max{rk, ri + ei} (Jk cannot be scheduled for execution before ri + ei because Ji cannot be finished before ri + ei),
di with min{di, dk − ek} (Ji must be finished by dk − ek so that Jk can be finished by dk)
does not change feasibility. Replace systematically according to the precedence relation: define effective release times r*k and effective deadlines d*k as follows. Pick Jk all of whose predecessors have been processed and compute r*k := max{rk, max_{Ji<Jk} (r*i + ei)}; symmetrically, pick Jk all of whose successors have been processed and compute d*k := min{dk, min_{Jk<Ji} (d*i − ei)}. Running EDF on the jobs with the effective parameters r*k, d*k then yields the algorithm of Theorem 10.

Utilization

A periodic task Ti with period pi and execution time ei has utilization ui = ei/pi; the total utilization of a set of tasks T is UT = Σi ui. The schedulable utilization UALG of an algorithm ALG satisfies: every set of tasks T with UT ≤ UALG is schedulable by ALG; and if U > UALG, then there is T with UT ≤ U that is not schedulable by ALG.

Utilization – Example
T1 = (2, 1): u1 = 1/2.
T1 = (11, 5, 2, 4): u1 = 2/5 (i.e., the phase and the deadline do not play any role).
T = {T1, T2, T3} where T1 = (2, 1), T2 = (6, 1), T3 = (8, 3): UT = 1/2 + 1/6 + 3/8 = 25/24.
(A small helper computing UT follows below.)
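A one-line helper for the computation above (illustrative):

```python
from math import fsum

def utilization(tasks):
    # tasks: list of (period p_i, execution time e_i); u_i = e_i / p_i
    return fsum(e / p for p, e in tasks)

# The example set above: U_T = 1/2 + 1/6 + 3/8 = 25/24 > 1
print(utilization([(2, 1), (6, 1), (8, 3)]))   # 1.0416...
```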
Real-Time Scheduling: Priority-Driven Scheduling – Dynamic-Priority

Optimality of EDF

Theorem 13
Let T = {T1, . . . , Tn} be a set of independent, preemptable periodic tasks with Di ≥ pi for i = 1, . . . , n. The following statements are equivalent:
1. T can be feasibly scheduled on one processor.
2. UT ≤ 1.
3. T is schedulable using EDF.
(I.e., in particular, UEDF = 1.)

Proof.
1.⇒2. We prove that UT > 1 implies that T is not schedulable.
2.⇒3. We prove that if EDF fails to produce a feasible schedule, then UT > 1.
3.⇒1. Trivial.

Proof of 1.⇒2.
Assume that UT = Σ_{i=1}^{n} ei/pi > 1. Consider a time instant t > maxi ϕi (i.e. a time when all tasks are already "running"). Observe that the number of jobs of Ti released in the time interval [0, t) is ⌈(t − ϕi)/pi⌉. Thus a single processor needs Σ_{i=1}^{n} ⌈(t − ϕi)/pi⌉ · ei time units to finish all jobs released before t. This total time satisfies

Σ_{i=1}^{n} ⌈(t − ϕi)/pi⌉ · ei ≥ Σ_{i=1}^{n} (t − ϕi) · ei/pi = Σ_{i=1}^{n} (t·ui − ϕi·ui) = t·UT − Σ_{i=1}^{n} ϕi·ui

Here Σ_{i=1}^{n} ϕi·ui does not depend on t. Since UT > 1, we have lim_{t→∞} (t·UT − Σ_{i=1}^{n} ϕi·ui − t) = ∞, so there exists t such that t·UT − Σ_{i=1}^{n} ϕi·ui > t + maxi Di. Hence, in order to complete all jobs released before t, we need more time than t + maxi Di. However, the latest deadline of a job released before t is t + maxi Di. So at least one job misses its deadline. □

Proof of 2.⇒3. – Simplified
Let us start with a proof of a special case (see the assumptions A1 and A2 below); a complete proof follows afterwards. We prove ¬3. ⇒ ¬2. assuming that Di = pi for i = 1, . . . , n. (The general case follows immediately.)
Assume that T is not schedulable according to EDF. (Our goal is to show that UT > 1.) This means that at least one job misses its deadline when EDF is used.
Simplifying assumptions:
A1 All tasks are in phase, i.e. the phase ϕ = 0 for every task T.
A2 The first job Ji,1 of a task Ti misses its deadline.
By A1, Ji,1 is released at 0 and misses its deadline at pi. Assume w.l.o.g. that this is the first time a job misses its deadline. (To simplify even further, you may (privately) assume that no other job has its deadline at pi.)
Let G be the set of all jobs released in [0, pi] with deadlines in [0, pi].
Crucial observations:
G contains Ji,1.
Only jobs of G can be executed in [0, pi]: jobs that do not belong to G cannot be executed in [0, pi], as Ji,1 is not completed in [0, pi] and only jobs of G can preempt Ji,1.
The processor is never idle in [0, pi], because Ji,1 is ready for computation throughout [0, pi].
Denote by EG the total execution time of G, that is, the sum of the execution times of all jobs in G. A corollary of the crucial observations: EG > pi, because otherwise Ji,1 (and all jobs that could possibly preempt it) would be completed by pi.
Let us compute EG. Since we assume ϕ = 0 for every T, the first job of each task Tℓ is released at 0, and thus ⌊pi/pℓ⌋ jobs of Tℓ belong to G. E.g., if pℓ = 2 and pi = 5, then three jobs of Tℓ are released in [0, 5] (at times 0, 2, 4), but only ⌊5/2⌋ = 2 of them have their deadlines in [0, pi]. Thus the total execution time of all jobs in G is

EG = Σ_{ℓ=1}^{n} ⌊pi/pℓ⌋ · eℓ

But then

pi < EG = Σ_{ℓ=1}^{n} ⌊pi/pℓ⌋ · eℓ ≤ Σ_{ℓ=1}^{n} (pi/pℓ) · eℓ = pi · Σ_{ℓ=1}^{n} uℓ = pi · UT

which implies that UT > 1.

Proof of 2.⇒3. – Complete
Now let us drop the simplifying assumptions A1 and A2. We prove ¬3. ⇒ ¬2. assuming that Di = pi for i = 1, . . . , n (the general case follows immediately). Assume that T is not schedulable by EDF. (We show that UT > 1.)
Suppose that a job Ji,k of Ti misses its deadline at time t = ri,k + pi. Assume that this is the earliest deadline miss. Let t⁻ be the end of the last interval before t in which either jobs with deadlines after t are being executed, or the processor is idle. Let G be the set of all jobs released in [t⁻, t] with deadlines in [t⁻, t].
Observations:
G contains Ji,k. Note that t⁻ ≤ ri,k, because otherwise either Ji,k or another job with a deadline at or before t would be executed just before t⁻.
Only jobs of G can be executed in [t⁻, t]. Indeed, by the definition of t⁻:
All jobs (possibly) executed in [t⁻, t] must have their deadlines at or before t.
If an idle interval precedes t⁻, then all jobs with deadlines at or before t must be released at or after t⁻, because otherwise one of them would have been executed just before t⁻.
If a job with its deadline after t is executed just before t⁻, then all jobs with deadlines at or before t must be released in [t⁻, t], because otherwise one of them would have been executed just before t⁻.
The processor is never idle in [t⁻, t], by the definition of t⁻.
Denote by EG the sum of the execution times of all jobs in G.
Now EG > t − t⁻, because otherwise Ji,k would complete in [t⁻, t]. How to compute EG? For a task Tℓ, denote by Rℓ the earliest release time of a job of Tℓ in the interval [t⁻, t]. For every Tℓ, exactly ⌊(t − Rℓ)/pℓ⌋ jobs of Tℓ belong to G. Thus

EG = Σ_{ℓ=1}^{n} ⌊(t − Rℓ)/pℓ⌋ · eℓ

As argued above,

t − t⁻ < EG = Σ_{ℓ=1}^{n} ⌊(t − Rℓ)/pℓ⌋ · eℓ ≤ Σ_{ℓ=1}^{n} ((t − t⁻)/pℓ) · eℓ = (t − t⁻) · Σ_{ℓ=1}^{n} uℓ = (t − t⁻) · UT

which implies that UT > 1. □

Density and EDF

What about tasks with Di < pi?
The density of a task Ti with period pi, execution time ei and relative deadline Di is defined by ei / min(Di, pi). The total density ∆T of a set of tasks T is the sum of the densities of the tasks in T. Note that if Di < pi for some i, then ∆T > UT.

Theorem 14
A set T of independent, preemptable, periodic tasks can be feasibly scheduled on one processor if ∆T ≤ 1.
Note that this is NOT a necessary condition!

Schedulability Test For EDF

The problem: Given a set of independent, preemptable, periodic tasks T = {T1, . . . , Tn}, where each Ti has a period pi, execution time ei, and relative deadline Di, decide whether T is schedulable by EDF.
Solution using utilization and density:
If pi ≤ Di for each i, then it suffices to decide whether UT ≤ 1.
Otherwise, decide whether ∆T ≤ 1: if yes, then T is schedulable with EDF; if not, then T still may or may not be schedulable.
Note that phases of tasks do not have to be specified, and parameters may vary: increasing periods or deadlines, or decreasing execution times, does not compromise schedulability. A sketch of this test in code follows.
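A minimal Python sketch of the test (illustrative; the "unknown" outcome reflects that the density condition is sufficient but not necessary):

```python
def edf_schedulable_test(tasks):
    # tasks: list of (p_i, e_i, D_i)
    if all(D >= p for p, e, D in tasks):
        # here U_T <= 1 is both sufficient and necessary (Theorem 13)
        U = sum(e / p for p, e, D in tasks)
        return "schedulable" if U <= 1 else "not schedulable"
    # otherwise fall back to the density test, which is only sufficient
    density = sum(e / min(D, p) for p, e, D in tasks)
    return "schedulable" if density <= 1 else "unknown"

# E.g. the controller + BIST configuration from the example that follows:
print(edf_schedulable_test([(10, 8, 10), (250, 50, 250)]))   # schedulable
```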
Schedulability Test for EDF – Example

Consider a digital robot controller. A control-law computation takes no more than 8 ms; the sampling rate is 100 Hz, i.e. it computes every 10 ms. Feasible? Trivially yes (utilization 8/10 = 0.8).
Add Built-In Self-Test (BIST): maximum execution time 50 ms; we want a minimal period that is feasible (at most one second). With a period of 250 ms it is still feasible (0.8 + 50/250 = 1).
Add a telemetry task: maximum execution time 15 ms; we want to minimize the deadline on telemetry; the period may be large. Reducing BIST to once a second, the deadline on telemetry may be set to 100 ms (density 8/10 + 50/1000 + 15/100 = 1).

Real-Time Scheduling: Priority-Driven Scheduling – Fixed-Priority

Fixed-Priority Algorithms

Recall that we consider a set of n tasks T = {T1, . . . , Tn}. Any fixed-priority algorithm schedules the tasks of T according to fixed (distinct) priorities assigned to the tasks. We write Ti ≻ Tj whenever Ti has a higher priority than Tj.
To simplify our reasoning, assume that all tasks are in phase, i.e. ϕk = 0 for all Tk. We will remove this assumption at the end.

Fixed-Priority Algorithms – Reminder

Recall that fixed-priority algorithms do not have to be optimal. Consider T = {T1, T2} where T1 = (4, 2) and T2 = (6, 3):
UT = 1, and thus T is schedulable by EDF;
if T1 ≻ T2, then J2,1 misses its deadline;
if T2 ≻ T1, then J1,1 misses its deadline.
(A simulation of this example follows below.)
We consider the following algorithms:
RM – assigns priorities to tasks based on their periods: the priority is inversely proportional to the period pi.
DM – assigns priorities to tasks based on their relative deadlines: the priority is inversely proportional to the relative deadline Di.
(In all cases, ties are broken arbitrarily.)
We consider the following questions: Are the algorithms optimal? How to efficiently (or even online) test for schedulability?
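A small Python sketch (illustrative only) that simulates preemptive fixed-priority scheduling of in-phase tasks in unit time slices and reproduces the example above:

```python
def fp_misses_deadline(tasks, horizon):
    # tasks: list of (p_i, e_i) with D_i = p_i, ordered from highest to
    # lowest priority; integer parameters, in-phase, one processor.
    remaining = [0] * len(tasks)          # remaining work of the current job
    for t in range(horizon):
        for i, (p, e) in enumerate(tasks):
            if t % p == 0:                # a new job of task i is released
                if remaining[i] > 0:      # previous job still unfinished,
                    return True           # ... so it missed its deadline
                remaining[i] = e
        for i in range(len(tasks)):       # run highest-priority ready job
            if remaining[i] > 0:
                remaining[i] -= 1
                break
    return False                          # no miss within the horizon

print(fp_misses_deadline([(4, 2), (6, 3)], 24))   # True: J2,1 misses at t = 6
print(fp_misses_deadline([(6, 3), (4, 2)], 24))   # True: J1,1 misses at t = 4
```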
Maximum Response Time

Assume that Di ≤ pi for every i ∈ {1, . . . , n}. Which job of a task Ti has the maximum response time?
As all tasks are in phase, the first job of Ti is released together with the (first) jobs of all tasks that have higher priority than Ti. This means that Ji,1 is the most preempted of the jobs in Ti. It follows that Ji,1 has the maximum response time. Note that this relies heavily on the assumption that tasks are in phase!
Thus, in order to decide whether T is schedulable, it suffices to test schedulability of the first jobs of all tasks.

Optimality of RM for Simply Periodic Tasks

Definition 15
A set {T1, . . . , Tn} is simply periodic if for every pair Ti, Tℓ satisfying pi > pℓ we have that pi is an integer multiple of pℓ.

Example 16
The helicopter control system from the first lecture.

Theorem 17
A set T of n simply periodic, independent, preemptable tasks with Di = pi is schedulable on one processor according to RM iff UT ≤ 1.
I.e., on simply periodic tasks RM is as good as EDF.
Note: Theorem 17 is true in general; no "in phase" assumption is needed.
Proof of Theorem 17
By Theorem 13, every schedulable set T satisfies UT ≤ 1. We prove that if T is not schedulable according to RM, then UT > 1.
Assume that a job Ji,1 of Ti misses its deadline at Di = pi. W.l.o.g., we assume that T1 ≻ · · · ≻ Tn according to RM. As the tasks are simply periodic, exactly pi/pℓ jobs of each higher-priority task Tℓ are released, together with their deadlines, in [0, pi]. The total execution time of Ji,1 and of all jobs that can possibly preempt it is therefore

E = ei + Σ_{ℓ=1}^{i−1} (pi/pℓ) · eℓ = Σ_{ℓ=1}^{i} (pi/pℓ) · eℓ = pi · Σ_{ℓ=1}^{i} uℓ ≤ pi · Σ_{ℓ=1}^{n} uℓ = pi · UT

Now E > pi, because otherwise Ji,1 would meet its deadline. Thus pi < E ≤ pi · UT, and we obtain UT > 1. □

Optimality of DM (RM) among Fixed-Priority Algorithms

Theorem 18
A set of independent, preemptable periodic tasks with Di ≤ pi that are in phase (i.e., ϕi = 0 for all i = 1, . . . , n) can be feasibly scheduled on one processor according to DM if it can be feasibly scheduled by some fixed-priority algorithm.

Proof.
Assume a fixed-priority feasible schedule with T1 ≻ · · · ≻ Tn. Consider the least i such that the relative deadline Di of Ti is larger than the relative deadline Di+1 of Ti+1. Swap the priorities of Ti and Ti+1. The resulting schedule is still feasible. (The swap affects only the jobs of Ti and Ti+1: Ti+1's first job now completes earlier, and Ti's first job completes by the time Ti+1's first job completed before the swap, which is at most Di+1 < Di.) DM is obtained by finitely many such swaps. □
Note: If the assumptions of the above theorem hold and all relative deadlines are equal to the periods, then RM is optimal among all fixed-priority algorithms.

Fixed-Priority Algorithms: Schedulability

We consider two schedulability tests:
the schedulable utilization URM of the RM algorithm;
time-demand analysis based on response times.

Schedulable Utilization for RM

Theorem 19
Let us fix n ∈ N and consider only independent, preemptable periodic tasks with Di = pi.
If T is a set of n tasks satisfying UT ≤ n(2^{1/n} − 1), then T is schedulable according to the RM algorithm.
For every U > n(2^{1/n} − 1) there is a set T of n tasks satisfying UT ≤ U that is not schedulable by RM.
Note: Theorem 19 holds in general; no "in phase" assumption is needed.

It follows that the maximum schedulable utilization URM over independent, preemptable periodic tasks satisfies

URM = inf_n n(2^{1/n} − 1) = lim_{n→∞} n(2^{1/n} − 1) = ln 2 ≈ 0.693

Note that UT ≤ n(2^{1/n} − 1) is a sufficient but not necessary condition for schedulability of T using the RM algorithm (an example will be given later). A sketch of the test follows below.
We say that a set of tasks T is RM-schedulable if it is schedulable according to RM, and RM-infeasible if it is not RM-schedulable.
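A minimal Python sketch of the sufficient test (illustrative; "unknown" reflects that the bound is not necessary):

```python
def rm_bound(n):
    # the bound n(2^{1/n} - 1) of Theorem 19; tends to ln 2 ~ 0.693
    return n * (2 ** (1 / n) - 1)

def rm_utilization_test(tasks):
    # tasks: list of (p_i, e_i) with D_i = p_i; sufficient test only
    U = sum(e / p for p, e in tasks)
    return "schedulable" if U <= rm_bound(len(tasks)) else "unknown"

print(rm_bound(2))                              # 0.8284... = 2(sqrt(2) - 1)
print(rm_utilization_test([(4, 2), (6, 3)]))    # unknown: U = 1 > 0.828
```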
Proof – Special Case

To simplify, we restrict to two tasks and always assume p1 ≤ p2 ≤ 2p1 (the latter condition is w.l.o.g.; proof omitted).
Outline: Given p1, p2, e1, denote by max_e2 the maximum execution time such that T = {(p1, e1), (p2, max_e2)} is RM-schedulable, and define U^{p1,p2}_{e1} := UT for this T. Such a T fully utilizes the processor: any increase in an execution time causes RM-infeasibility. Now we find the (global) minimum minU of U^{p1,p2}_{e1} w.r.t. all parameters p1, p2, e1.
Note that this suffices to obtain the desired result:
Given a set of tasks T = {(p1, e1), (p2, e2)} satisfying UT ≤ minU, we get UT ≤ minU ≤ U^{p1,p2}_{e1}, and thus the execution time e2 cannot be larger than max_e2. Thus T is RM-schedulable.
Given U > minU, there must be p1, p2, e1 satisfying minU ≤ U^{p1,p2}_{e1} < U, where U^{p1,p2}_{e1} = UT for the set of tasks T = {(p1, e1), (p2, max_e2)}. Increasing e1 by a sufficiently small ε > 0 then makes the set RM-infeasible without making the utilization larger than U.
First, minimize w.r.t. e1 (p1, p2 fixed). Two cases depending on e1:
1. e1 < p2 − p1: The maximum RM-feasible max_e2 (given p1, p2, e1) is p2 − 2e1, which gives the utilization

U^{p1,p2}_{e1} = e1/p1 + (p2 − 2e1)/p2 = 1 + (e1/p2) · (p2/p1 − 2)

As p2/p1 − 2 ≤ 0, the utilization U^{p1,p2}_{e1} is minimized by maximizing e1.
2. e1 ≥ p2 − p1: The maximum RM-feasible max_e2 (given p1, p2, e1) is p1 − e1, which gives the utilization

U^{p1,p2}_{e1} = e1/p1 + (p1 − e1)/p2 = p1/p2 + (e1/p2) · (p2/p1 − 1)

As p2/p1 − 1 ≥ 0, the utilization U^{p1,p2}_{e1} is minimized by minimizing e1.
In both cases, the minimum of U^{p1,p2}_{e1} is attained at e1 = p2 − p1. (Both expressions defining U^{p1,p2}_{e1} give the same value for e1 = p2 − p1.)
Substitute e1 = p2 − p1 into the expression for U^{p1,p2}_{e1}:

U^{p1,p2}_{p2−p1} = p1/p2 + ((p2 − p1)/p2) · (p2/p1 − 1) = p1/p2 + (p1/p2) · (p2/p1 − 1)² = (p1/p2) · (1 + (p2/p1 − 1)²)

Denoting G = p2/p1 − 1 we obtain

U^{p1,p2}_{p2−p1} = (p1/p2) · (1 + G²) = (1 + G²)/(p2/p1) = (1 + G²)/(1 + G)

Differentiating w.r.t. G we get (G² + 2G − 1)/(1 + G)², which is equal to zero at G = −1 ± √2. Here only G = −1 + √2 > 0 is acceptable, since the other root is negative. Thus the minimum value of U^{p1,p2}_{e1} is

(1 + (√2 − 1)²) / (1 + (√2 − 1)) = (4 − 2√2)/√2 = 2(√2 − 1)

It is attained at periods satisfying G = p2/p1 − 1 = √2 − 1, i.e. satisfying p2 = √2 · p1.
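A quick numeric sanity check of this minimization (not part of the proof; the grid search is ours):

```python
from math import sqrt

# Minimize (1 + G^2)/(1 + G) over a fine grid of G in (0, 1].
f = lambda G: (1 + G * G) / (1 + G)
value, G = min((f(k / 10**6), k / 10**6) for k in range(1, 10**6 + 1))
print(value, G)                        # ~0.828427 at G ~0.414214
print(2 * (sqrt(2) - 1), sqrt(2) - 1)  # the analytic minimum and its location
```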
The execution time e1 which at full utilization of the processor (due to max_e2) gives the minimum utilization is e1 = p2 − p1 = (√2 − 1)p1, and the corresponding max_e2 = p1 − e1 = p1 − (p2 − p1) = 2p1 − p2.
Scaling to p1 = 1, we obtain a completely determined example:

p1 = 1    p2 = √2 ≈ 1.41    e1 = √2 − 1 ≈ 0.41    max_e2 = 2 − √2 ≈ 0.59

This example maximally utilizes the processor (no execution time can be increased) but has the minimum utilization 2(√2 − 1).

Proof Idea of Theorem 19

Fix periods p1 < · · · < pn so that (w.l.o.g.) pn ≤ 2p1. Then the following set of tasks has the smallest utilization among all task sets that fully utilize the processor (i.e., any increase in any execution time makes the set unschedulable):

ek = pk+1 − pk    for k = 1, . . . , n − 1
en = pn − 2 · Σ_{k=1}^{n−1} ek = 2p1 − pn

(The slide shows the timelines of the tasks T1, . . . , Tn over their first periods.)
Time-Demand Analysis

Consider a set of n tasks T = {T1, . . . , Tn}. Recall that we consider only independent, preemptable, in-phase (i.e., ϕi = 0 for all i) tasks without resource contentions. Assume that Di ≤ pi for every i, and consider an arbitrary fixed-priority algorithm. W.l.o.g. assume T1 ≻ · · · ≻ Tn.

Idea: For every task Ti and every time instant t ≥ 0, compute the total execution time wi(t) (the time demand) of the first job Ji,1 and of all higher-priority jobs released up to time t. If wi(t) ≤ t for some time t ≤ Di, then Ji,1 is schedulable, and hence all jobs of Ti are schedulable.

Consider one task Ti at a time, starting with the highest priority and working to the lowest priority. Focus on the first job Ji,1 of Ti: if Ji,1 makes it, all jobs of Ti will make it due to ϕi = 0.

At time t ≥ 0, the processor time demand wi(t) for this job and all higher-priority jobs released in [0, t) is bounded by

$$w_i(t) = e_i + \sum_{\ell=1}^{i-1} \left\lceil \frac{t}{p_\ell} \right\rceil e_\ell \qquad \text{for } 0 < t \le p_i$$

(Note that the smallest t for which wi(t) ≤ t is the response time of Ji,1, and hence the maximum response time of jobs in Ti.)

If wi(t) ≤ t for some t ≤ Di, the job Ji,1 meets its deadline Di. If wi(t) > t for all 0 < t ≤ Di, then the first job of the task cannot complete by its deadline.

Time-Demand Analysis – Example

Example: T1 = (3, 1), T2 = (5, 1.5), T3 = (7, 1.25), T4 = (9, 0.5). This set of tasks is schedulable by RM even though UT ≈ 0.87 > 0.757 = URM(4).

Time-Demand Analysis

The time-demand function wi(t) is a staircase function: steps in the time demand occur at the multiples of the periods of higher-priority tasks, and the value of wi(t) − t decreases linearly from a step until the next step. If our interest is the schedulability of a task, it therefore suffices to check whether wi(t) ≤ t at the time instants when a higher-priority job is released and at Di.

Our schedulability test becomes: compute wi(t) and check whether wi(t) ≤ t for some t equal either to Di, or to j · pk where k = 1, 2, . . . , i and j = 1, 2, . . . , ⌊Di/pk⌋.
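This test is easy to mechanize. Below is a small sketch (function name and data layout are mine) implementing exactly the check above for the example task set; it evaluates wi(t) only at the release times j · pk of higher-priority jobs and at Di, as argued above.

```python
from math import ceil, floor

def time_demand_schedulable(tasks):
    """tasks: list of (p_i, e_i) sorted by decreasing priority (here: RM order).
    Assumes phases 0 and D_i = p_i. Returns True iff every task passes the test."""
    for i, (pi, ei) in enumerate(tasks):
        di = pi  # relative deadline equals period in this setting
        # Candidate instants: release times of higher-priority jobs, and D_i.
        points = {di}
        for pk, _ in tasks[:i]:
            points.update(j * pk for j in range(1, floor(di / pk) + 1))
        def w(t):  # time demand of J_{i,1} plus all higher-priority jobs in [0, t)
            return ei + sum(ceil(t / pk) * ek for pk, ek in tasks[:i])
        if not any(w(t) <= t for t in sorted(points)):
            return False
    return True

# The example above: schedulable by RM although U ~ 0.87 exceeds U_RM(4) ~ 0.757.
print(time_demand_schedulable([(3, 1), (5, 1.5), (7, 1.25), (9, 0.5)]))  # True
```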
Time-Demand Analysis – Comments

The time-demand analysis schedulability test is more complex than the schedulable utilization test but more general:
- It works for any fixed-priority scheduling algorithm, provided the tasks have short response times (Di ≤ pi).
- It can be extended to tasks with arbitrary deadlines.
- It is still more efficient than exhaustive simulation.

Assuming that the tasks are in phase, the time-demand analysis is complete.

We have considered the time-demand analysis for tasks in phase. In particular, we used the fact that the first job has the maximum response time. This is not true if the jobs are not in phase; then we need to identify the so-called critical instant, the time instant at which the system is most loaded and a task has its worst response time.

Critical Instant – Formally

A critical instant tcrit of a task Ti is a time instant at which a job Ji,k of Ti is released so that Ji,k either does not meet its deadline, or has the maximum response time of all jobs in Ti.

Theorem 20
Assume Di ≤ pi for every i and use a fixed-priority algorithm. A critical instant of a task Ti occurs when one of its jobs Ji,k is released at the same time with a job from every higher-priority task.

Note that the situation described in the theorem does not have to occur if the tasks are not in phase!

To get such a critical instant, we set the phases of all tasks to zero, which gives a new set of tasks T′ = {T′1, . . . , T′n}. Denote the jobs of T′i by J′i,k.
Corollary 21
Assume Di ≤ pi for every i and use a fixed-priority algorithm. Consider a critical instant tcrit of a task Ti. If the job Ji,k released at tcrit misses its deadline, then J′i,1 misses its deadline. Otherwise, the response time of Ji,k is at most as large as the response time of J′i,1.

Critical Instant and Schedulability Tests

Now we can (partially) decide schedulability of a given set of periodic tasks T = {T1, . . . , Tn} (assuming Di ≤ pi) as follows:
- Set the phases of all tasks to zero, which gives a new set of tasks T′ = {T′1, . . . , T′n}.
- Decide schedulability of T′, e.g., using the time-demand analysis.
- If T′ is schedulable, then T is also schedulable.
- If T′ is not schedulable, then T does not have to be schedulable; it may still be schedulable, which makes the time-demand analysis incomplete in general for tasks that are not in phase.
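In code, this sufficient test is just the earlier sketch applied to the zero-phase variant. A hedged usage example, reusing the hypothetical time_demand_schedulable from above (which already assumes ϕi = 0):

```python
# Tasks given as (phase, period, execution time), in RM priority order.
tasks_with_phases = [(0.5, 3, 1), (1.0, 5, 1.5), (0.0, 7, 1.25), (2.0, 9, 0.5)]

# Sufficient test: drop the phases (set them to zero) and run the analysis.
zero_phase = [(p, e) for (_, p, e) in tasks_with_phases]
if time_demand_schedulable(zero_phase):
    print("schedulable (for any phases)")
else:
    print("inconclusive: may or may not be schedulable with these phases")
```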
Dynamic vs Fixed Priority

EDF
- pros: optimal; a very simple and complete test for schedulability
- cons: difficult to predict which job misses its deadline; strictly following EDF in case of overloads assigns higher priority to jobs that have already missed their deadlines; larger scheduling overhead

DM (RM)
- pros: easier to predict which job misses its deadline (in particular, tasks are not blocked by lower-priority tasks); easy implementation with little scheduling overhead; optimal in some cases often occurring in practice
- cons: not optimal; incomplete and more involved tests for schedulability

Real-Time Scheduling: Priority-Driven Scheduling – Aperiodic Tasks

Current Assumptions

We slightly abuse notation and talk about periodic/aperiodic/sporadic jobs, meaning jobs of periodic/aperiodic/sporadic tasks.
- Single processor
- Fixed number, n, of independent periodic tasks
- Jobs can be preempted at any time and never suspend themselves; no resource contentions
- Aperiodic jobs exist; they are independent of each other and of the periodic tasks, and can be preempted at any time
- No sporadic jobs (for now)
- Jobs are scheduled using a priority-driven algorithm

Scheduling Aperiodic Jobs

Consider a set T = {T1, . . . , Tn} of periodic tasks and an aperiodic task A.

Recall that:
- A schedule is feasible if all jobs with hard real-time constraints complete before their deadlines ⇒ this includes all periodic jobs.
- A scheduling algorithm is optimal if it always produces a feasible schedule whenever such a schedule exists and, if a cost function is given, minimizes the cost.

Background Scheduling of Aperiodic Jobs

Aperiodic jobs are scheduled and executed only at times when there are no periodic jobs ready for execution.
- Advantages: clearly produces feasible schedules; extremely simple to implement.
- Disadvantages: not optimal, since the execution of aperiodic jobs may be unnecessarily delayed.

Example: T1 = (3, 1), T2 = (10, 4)
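A minimal sketch of background scheduling (my own illustration, not from the slides): an integer-time simulation of the example above in which an aperiodic job only receives the slots left idle by the RM schedule of T1 and T2.

```python
def background_schedule(tasks, horizon, aper_release, aper_exec):
    """tasks: list of (period, exec) in RM order; unit-length time slots.
    Returns the completion time of a single aperiodic job, executed only
    in slots where no periodic job is ready."""
    remaining = [0] * len(tasks)   # remaining work of the current job of each task
    aper_left = aper_exec
    for t in range(horizon):
        for i, (p, e) in enumerate(tasks):
            if t % p == 0:
                remaining[i] = e   # new periodic job released
        ready = [i for i, r in enumerate(remaining) if r > 0]
        if ready:
            remaining[ready[0]] -= 1           # highest-priority periodic job runs
        elif t >= aper_release and aper_left > 0:
            aper_left -= 1                     # idle slot: run the aperiodic job
            if aper_left == 0:
                return t + 1
    return None

# An aperiodic job of length 2 released at time 0 must wait for idle slots:
print(background_schedule([(3, 1), (10, 4)], horizon=30, aper_release=0, aper_exec=2))
```

On this trace the first idle slots appear only at times 7 and 8, so the aperiodic job completes at time 9 even though it arrived at time 0.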
Polled Execution of Aperiodic Jobs

We may use a polling server:
- a periodic task (pS, eS) scheduled according to the periodic algorithm, generally as the highest-priority task.
- When executed, it examines the aperiodic job queue. If an aperiodic job is in the queue, it is executed for up to eS time units. If the aperiodic queue is empty, the polling server self-suspends, giving up its execution slot.
- The server does not wake up once it has self-suspended; aperiodic jobs which become active during the period are not considered for execution until the next period begins.

It is simple to prove correctness, but the performance is less than ideal – aperiodic jobs are executed only in particular time slots.

Example: T1 = (3, 1), T2 = (10, 4), poller = (2.5, 0.5)

Can we do better? Yes: the polling server is a special case of a periodic server for aperiodic jobs.
Periodic Servers – Terminology

A periodic server is a task that behaves much like a periodic task but is created for the purpose of executing aperiodic jobs. A periodic server TS = (pS, eS):
- pS is the period of the server
- eS is the (maximal) budget of the server
- The budget can be consumed and replenished; the budget is exhausted when it reaches 0. (Periodic servers differ in how they consume and replenish the budget.)

A periodic server is
- backlogged whenever the aperiodic job queue is non-empty,
- idle if the queue is empty,
- eligible if it is backlogged and the budget is not exhausted.

When a periodic server is eligible, it is scheduled as any other periodic task with parameters (pS, eS).

Periodic Servers

Each periodic server is thus specified by
- consumption rules: how the budget is consumed,
- replenishment rules: when and how the budget is replenished.

Polling server
- Consumption rules: whenever the server executes, the budget is consumed at the rate of one per unit time; whenever the server becomes idle, the budget gets immediately exhausted.
- Replenishment rule: at each time instant k · pS, replenish the budget to eS.

Deferrable server
- Consumption rules: the budget is consumed at the rate of one per unit time whenever the server executes; unused budget is retained throughout the period, to be used whenever there are aperiodic jobs to execute (i.e., instead of discarding the budget if there is no aperiodic job to execute at the start of the period, keep it in the hope that a job arrives).
- Replenishment rule: the budget is set to eS at the multiples of the period, i.e., at time instants k · pS for k = 0, 1, 2, . . . (Note that the server is not able to accumulate the budget over periods.)
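The two servers differ only in the consumption rule. A small sketch (names and the unit-step structure are mine) of the budget bookkeeping for a polling vs. a deferrable server:

```python
def server_budget_trace(p_s, e_s, backlogged, deferrable, horizon):
    """Trace the budget of a (p_s, e_s) server over unit time steps.
    backlogged(t) says whether an aperiodic job is queued at time t.
    The server is assumed to execute whenever it is eligible (highest priority)."""
    budget, trace = 0.0, []
    for t in range(horizon):
        if t % p_s == 0:
            budget = e_s                      # replenishment at multiples of p_s
        if backlogged(t):
            if budget > 0:
                budget -= 1                   # server executes one unit
        elif not deferrable:
            budget = 0                        # polling server: idle => exhausted
        trace.append(budget)
    return trace

# One aperiodic unit of work arrives at t = 1, just after the server polled:
arrives_late = lambda t: t == 1
print(server_budget_trace(3, 1, arrives_late, deferrable=False, horizon=6))
print(server_budget_trace(3, 1, arrives_late, deferrable=True, horizon=6))
```

The work arriving at t = 1 finds the polling server's budget already exhausted and must wait for a later polling instant, whereas the deferrable server retained its budget and serves it immediately.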
We consider both fixed-priority scheduling and dynamic-priority scheduling (EDF).

Deferrable Server – RM

Here the tasks are scheduled using RM. Is it possible to increase the budget of the server to 1.5?

Consider T1 = (3.5, 1.5), T2 = (6.5, 0.5) and TDS = (3, 1). A critical instant for T1 = (3.5, 1.5) looks as shown on the slide; i.e., increasing the budget above 1 may cause T1 to miss its deadline.

Deferrable Server – Critical Instant

Lemma 22
Assume a fixed-priority scheduling algorithm. Assume that Di ≤ pi and that the deferrable server (pS, eS) has the highest priority among all tasks. Then a critical instant of every periodic task Ti occurs at a time t0 when all of the following are true:
- One of its jobs Ji,c is released at t0.
- A job in every higher-priority periodic task is released at t0.
- The budget of the server is eS at t0, one or more aperiodic jobs are released at t0, and they keep the server backlogged hereafter.
- The next replenishment time of the server is t0 + eS.

Assume TDS ≻ T1 ≻ T2 ≻ · · · ≻ Tn (i.e.,
T1 has the highest priority among the periodic tasks and Tn the lowest).

Deferrable Server – Time Demand Analysis

Assume that the deferrable server has the highest priority. The definition of critical instant is identical to that for the periodic tasks without the deferrable server, plus the worst-case requirements for the server. Thus the expression for the time-demand function becomes

$$w_i(t) = e_i + \sum_{k=1}^{i-1} \left\lceil \frac{t}{p_k} \right\rceil e_k + e_S + \left\lceil \frac{t - e_S}{p_S} \right\rceil e_S \qquad \text{for } 0 < t \le p_i$$

To determine whether the task Ti is schedulable, we simply check whether wi(t) ≤ t for some t ≤ Di. Note that this is a sufficient condition, not a necessary one.

Check whether wi(t) ≤ t for some t equal either to Di, or to j · pk where k = 1, 2, . . . , i and j = 1, 2, . . . , ⌊Di/pk⌋, or to eS, eS + pS, eS + 2pS, . . . , eS + ⌊(Di − eS)/pS⌋ pS.

Example: TDS = (3, 1.0), T1 = (3.5, 1.5), T2 = (6.5, 0.5)

Deferrable Server – Schedulable Utilization

No maximum schedulable utilization is known in general. A special case: a set T of n independent, preemptable periodic tasks whose periods satisfy pS < p1 < · · · < pn < 2pS and pn > pS + eS, and whose relative deadlines are equal to their respective periods, can be scheduled according to RM with a deferrable server provided that

$$U_T \le U_{RM/DS}(n) := (n-1)\left[\left(\frac{u_S + 2}{u_S + 1}\right)^{\frac{1}{n-1}} - 1\right] \qquad \text{where } u_S = e_S/p_S$$

Deferrable Server – EDF

Here the tasks are scheduled using EDF. TDS = (3, 1), T1 = (2, 3.5, 1.5), T2 = (6.5, 0.5)

Deferrable Server – EDF – Schedulability

Theorem 23
A set of n independent, preemptable, periodic tasks satisfying pi ≤ Di for all 1 ≤ i ≤ n is schedulable with a deferrable server with period pS, execution budget eS and utilization uS = eS/pS according to the EDF algorithm if

$$\sum_{k=1}^{n} u_k + u_S \left(1 + \frac{p_S - e_S}{\min_i D_i}\right) \le 1$$
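Both bounds are easy to evaluate. A sketch (function names are mine; for the example I assume Di = pi) computing the RM/DS bound and the EDF deferrable-server test of Theorem 23:

```python
def u_rm_ds(n, u_s):
    """Schedulable utilization bound for RM with a deferrable server
    (special case above); u_s = e_s / p_s."""
    return (n - 1) * (((u_s + 2) / (u_s + 1)) ** (1 / (n - 1)) - 1)

def edf_ds_schedulable(tasks, p_s, e_s):
    """Sufficient test of Theorem 23; tasks is a list of (p_i, e_i, D_i)."""
    u = sum(e / p for p, e, _ in tasks)
    u_s = e_s / p_s
    return u + u_s * (1 + (p_s - e_s) / min(d for _, _, d in tasks)) <= 1

print(u_rm_ds(2, 1 / 3))   # bound for two tasks with the server of the RM example
# For the running example the sufficient condition happens not to hold (False),
# which only means the test is inconclusive here:
print(edf_ds_schedulable([(3.5, 1.5, 3.5), (6.5, 0.5, 6.5)], 3, 1))
```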
Sporadic Server – Motivation

Problem with the polling server: TPS = (pS, eS) executes aperiodic jobs only at the multiples of pS.
Problem with the deferrable server: TDS = (pS, eS) may delay lower-priority jobs longer than a periodic task with the same parameters (pS, eS); therefore special versions of the time-demand analysis and of the utilization bounds are needed.

A sporadic server TSS = (pS, eS)
- may execute jobs "in the middle" of its period,
- never delays periodic tasks longer than the periodic task (pS, eS).
Thus it can be tested for schedulability as an ordinary periodic task.

Originally proposed by Sprunt, Sha, Lehoczky in 1989; the original version contains a bug which allows a longer delay of lower-priority jobs. The version that is part of the POSIX standard is also incorrect, as observed and (probably) corrected by Stanovich in 2010.

Very Simple Sporadic Server

For simplicity, we consider only fixed-priority scheduling, i.e., assume T1 ≻ T2 ≻ · · · ≻ Tn, and consider a sporadic server TSS = (pS, eS) with the highest priority.

Notation:
- tr = the latest replenishment time
- tf = the first instant after tr at which the server begins to execute
- nr = a variable representing the next replenishment

Consumption rule: the budget is consumed (at the rate of one per unit time) whenever the current time t satisfies t ≥ tf.

Replenishment rules:
- At the beginning, tr = nr = 0.
- Whenever the current time is equal to nr, the budget is set to eS and tr is set to the current time.
- At the first instant tf after tr at which the server starts executing, nr is set to tf + pS.

(Note that such a server resembles a periodic task with the highest priority whose jobs are released at times tf and whose execution times are at most eS.)

Very Simple Sporadic/Background Server

New notation:
- tr = the latest replenishment time
- tf = the first instant after tr at which the server begins to execute and at least one task of T is not idle
- nr = a variable representing the next replenishment

Consumption rule: the budget is consumed (at the rate of one per unit time) whenever the current time t satisfies t ≥ tf and at least one task of T is not idle.

Replenishment rules:
- At the beginning, tr = nr = 0.
- Whenever the current time is equal to nr, the budget is set to eS and tr is set to the current time.
- At the beginning of an idle interval of T, the budget is set to eS and nr is set to the end of this interval.
- At the first instant tf after tr at which the server starts executing and T is not idle, nr is set to tf + pS.

This combines the very simple sporadic server with background scheduling.

Very Simple Sporadic Server – Correctness (informally)
- Assuming that T never idles, the sporadic server resembles a periodic task with the highest priority whose jobs are released at times tf and whose execution times are at most eS.
- Whenever T idles, the sporadic server executes in the background, i.e., it does not block any periodic task and hence does not consume the budget.
- Whenever an idle interval of T ends, we may treat this situation as a restart of the system with possibly different phases of the tasks (so that it is safe to have the budget equal to eS).

Note that in both versions of the sporadic server, eS units of execution time are available for aperiodic jobs every pS units of time. This means that if the server is always backlogged, then it executes for eS time units every pS units of time.
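A sketch of the very simple sporadic server's bookkeeping (my own unit-time rendering of the rules above, without the background extension):

```python
def simple_sporadic_server(p_s, e_s, backlogged, horizon):
    """Unit-time trace of the very simple sporadic server (highest priority).
    backlogged(t): an aperiodic job is queued at time t. Returns the served slots."""
    t_r = n_r = 0        # latest replenishment time, next replenishment time
    t_f = None           # first execution instant after t_r
    budget, served = e_s, []
    for t in range(horizon):
        if t == n_r:                       # replenishment rule
            budget, t_r, t_f = e_s, t, None
        if backlogged(t) and budget > 0:
            if t_f is None:                # server starts executing after t_r
                t_f = t
                n_r = t_f + p_s            # next replenishment anchored at t_f
            budget -= 1                    # consumption rule (t >= t_f)
            served.append(t)
    return served

# Jobs queued during [4, 12): the server delivers e_s units per period,
# anchored at its own start times rather than at the multiples of p_s.
print(simple_sporadic_server(5, 2, lambda t: 4 <= t < 12, horizon=20))
```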
Real-Time Scheduling: Priority-Driven Scheduling – Sporadic Tasks

Current Assumptions
- Single processor
- Fixed number, n, of independent periodic tasks T1, . . . , Tn where Ti = (ϕi, pi, ei, Di)
- Jobs can be preempted at any time and never suspend themselves; no resource contentions
- Sporadic tasks: independent of the periodic tasks; jobs can be preempted at any time
- Aperiodic tasks: for simplicity scheduled in the background, i.e., we may ignore them
- Jobs are scheduled using a priority-driven algorithm

Our situation

Based on the execution time and deadline of each newly arrived sporadic job, decide whether to accept or reject the job. Accepting the job implies that the job will complete within its deadline, without causing any periodic job or previously accepted sporadic job to miss its deadline. Do not accept a sporadic job if we cannot guarantee that it will meet its deadline.

Scheduling Sporadic Jobs – Correctness and Optimality

A correct schedule is one where all periodic tasks, and all sporadic jobs that have been accepted, meet their deadlines. A scheduling algorithm supporting sporadic jobs is a correct algorithm if it only produces correct schedules for the system. A sporadic job scheduling algorithm is optimal if the following holds: it accepts a new sporadic job and schedules that job to complete by its deadline iff the new job can be correctly scheduled to complete in time.

Model for Scheduling Sporadic Jobs with EDF

Assume that all jobs in the system are scheduled by EDF; if more sporadic jobs are released at the same time, their acceptance test is done in the EDF order.

Definitions:
- Sporadic jobs are denoted by S(r, d, e) where r is the release time, d the (absolute) deadline, and e the maximum execution time.
- The density of S(r, d, e) is defined as e/(d − r).
- The total density of a set of sporadic jobs is the sum of the densities of these jobs.
- The sporadic job S(r, d, e) is active at time t iff t ∈ (r, d].
Note that each job of a periodic task (ϕ, p, e, D) can be seen as a sporadic job; to simplify, we assume that always D ≤ p. This in turn means that there is always at most one job of a given task active at a given time instant. For every job of this task released at r with absolute deadline d, we obtain the density e/(d − r) = e/D.
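The density condition of the next theorem is easy to check on a concrete job set. A sketch (my own helper; the sample jobs are those of Example 25 below):

```python
def total_density_at(jobs, t):
    """jobs: list of (r, d, e); total density of the jobs active at time t,
    where S(r, d, e) is active at t iff r < t <= d."""
    return sum(e / (d - r) for r, d, e in jobs if r < t <= d)

jobs = [(0, 2, 1), (0.5, 2.5, 1), (1, 3, 1)]   # S1, S2, S3 of Example 25
print(total_density_at(jobs, 1.5))             # 1.5 > 1: the test is inconclusive
```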
Schedulability of Sporadic Jobs with EDF

Theorem 24
A set of independent, preemptable sporadic jobs is schedulable according to EDF if at every time instant t the total density of all jobs active at time t is at most one.

Proof.
By contradiction: suppose that a job misses its deadline at t, and that no deadlines are missed before t.
- Let t1 be the supremum of the time instants before t when either the system idles, or a job with a deadline after t executes.
- Suppose that the jobs J1, . . . , Jk execute in [t1, t] and that they are ordered w.r.t. increasing deadlines (Jk misses its deadline at t).
- Let L be the number of releases and completions in [t1, t]; denote by ti the time instant when the i-th such event occurs (the first one occurs at t1, and we denote by tL+1 the time instant t).
- Denote by Xi the set of all jobs that are active during the interval (ti, ti+1] and let ∆i be their total density.
The rest on the whiteboard ...

Sporadic Jobs with EDF – Example

Note that the above theorem covers periodic as well as sporadic jobs. The test is sufficient but not necessary.

Example 25
Three sporadic jobs: S1(0, 2, 1), S2(0.5, 2.5, 1), S3(1, 3, 1). The total density at time 1.5 is 1.5; yet the jobs are schedulable by EDF.
Admission Control for Sporadic Jobs with EDF

Let ∆ be the total density of the periodic tasks. Assume that a new sporadic job S(t, d, e) is released at time t, and that at time t there are n active sporadic jobs in the system.
- The EDF scheduler maintains a list of the jobs, in non-decreasing order of their deadlines.
- The deadlines partition the time from t to ∞ into n + 1 discrete intervals I1, I2, . . . , In+1: I1 begins at t and ends at the earliest sporadic job deadline; for each 1 ≤ k ≤ n, Ik+1 begins when the interval Ik ends, and ends at the next deadline in the list (or at ∞ for In+1).
- The scheduler maintains the total density ∆S,k of the sporadic jobs active in each interval Ik.
- Let Il be the interval containing the deadline d of the new sporadic job S(t, d, e). The scheduler accepts the job if

e/(d − t) + ∆S,k ≤ 1 − ∆  for all k = 1, 2, . . . , l,

i.e., it accepts the job if adding it does not push the total density of any of the intervals I1, . . . , Il past 1.
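A sketch of this acceptance test (data layout and names are mine; the densities of the already accepted jobs are recomputed on the fly rather than maintained incrementally, to keep the sketch short):

```python
def accept_sporadic(active, delta_periodic, t, d, e):
    """active: accepted sporadic jobs as (r, d, e) with deadlines > t.
    Returns True iff the new job S(t, d, e) passes the density test."""
    boundaries = sorted(set(dk for _, dk, _ in active if dk > t) | {d})
    new_density = e / (d - t)
    lo = t
    for hi in boundaries:
        # total density of accepted jobs active in the interval (lo, hi]
        dens = sum(ek / (dk - rk) for rk, dk, ek in active if dk > lo)
        if new_density + dens > 1 - delta_periodic:
            return False
        if d <= hi:          # the interval containing d has been checked: done
            break
        lo = hi
    return True

# No periodic load, one accepted job with deadline 4 and density 0.5:
print(accept_sporadic([(0, 4, 2)], 0.0, t=1, d=3, e=1))    # 0.5 + 0.5 <= 1: accept
print(accept_sporadic([(0, 4, 2)], 0.0, t=1, d=3, e=1.2))  # 0.6 + 0.5 > 1: reject
```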
Admission Control for Sporadic Jobs with EDF

This acceptance test is not optimal: a sporadic job may be rejected even though it could be scheduled. The test is based on the density and hence is sufficient but not necessary. It is possible to derive a – much more complex – expression for schedulability which takes the slack time into account and is optimal. It is unclear whether the optimality is worth the complexity.

Sporadic Jobs with EDF

One way to schedule sporadic jobs in a fixed-priority system is to use a sporadic server to execute them. Because the server (pS, eS) has eS units of processor time every pS units of time, the scheduler can compute the least amount of time available to every sporadic job in the system. Assume that the sporadic jobs are ordered among themselves according to EDF.

When the first sporadic job S1(t, dS,1, eS,1) arrives, there are at least ⌊(dS,1 − t)/pS⌋ eS units of processor time available to the server before the deadline of the job. Therefore it accepts S1 if the slack of the job satisfies

σS,1(t) = ⌊(dS,1 − t)/pS⌋ eS − eS,1 ≥ 0

To decide whether a new job Si(t, dS,i, eS,i) is acceptable when there are n sporadic jobs in the system, the scheduler first computes the slack σS,i(t) of Si,

σS,i(t) = ⌊(dS,i − t)/pS⌋ eS − eS,i − ∑_{dS,k < dS,i} (execution time still owed to Sk),

where the sum ranges over the accepted sporadic jobs Sk with earlier deadlines, and accepts Si only if σS,i(t) ≥ 0.
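A sketch of this acceptance rule (my reconstruction: each earlier-deadline job is charged with its remaining execution time):

```python
from math import floor

def sporadic_server_slack(p_s, e_s, accepted, t, d_i, e_i):
    """accepted: list of (deadline, remaining_exec) of already accepted jobs.
    Slack of a new sporadic job S_i(t, d_i, e_i) served by a (p_s, e_s) server."""
    earlier = sum(rem for dk, rem in accepted if dk < d_i)
    return floor((d_i - t) / p_s) * e_s - e_i - earlier

# Server (p_s, e_s) = (5, 2); first job with deadline 18 and execution time 4:
print(sporadic_server_slack(5, 2, [], t=0, d_i=18, e_i=4) >= 0)        # accept
# A second job with deadline 20 must also leave room for the first one:
print(sporadic_server_slack(5, 2, [(18, 4)], t=0, d_i=20, e_i=3) >= 0)
```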
Individual Jobs – Speedup Helps(?)

Theorem 32
If a set of jobs is feasible on m identical processors, then the same set of jobs will be scheduled to meet all deadlines by EDF on m identical processors in which the individual processors are (2 − 1/m) times as fast as in the original system.

The result is tight for EDF (assuming dynamic job priority):

Theorem 33
There are sets of jobs that can be feasibly scheduled on m identical processors, but EDF cannot schedule them on m processors that are only (2 − 1/m − ε) times faster, for every ε > 0.

... and there are also general lower bounds for online algorithms:

Theorem 34
There are sets of jobs that can be feasibly scheduled on m (here m is even) identical processors, but no online algorithm can schedule them on m processors that are only (1 + ε) times faster, for every ε < 1/5.

[Optimal Time-Critical Scheduling Via Resource Augmentation, Phillips et al., STOC 1997]

Reactive Systems

Consider a fixed number, n, of independent periodic tasks T = {T1, . . . , Tn}, i.e., there is no dependency relation among the jobs. Unless otherwise stated, assume no phases and deadlines equal to periods. Ignore aperiodic tasks; no sporadic tasks unless otherwise stated.

The utilization ui of a periodic task Ti with period pi and execution time ei is defined by ui := ei/pi; ui is the fraction of time a periodic task with period pi and execution time ei keeps a processor busy.

The total utilization UT of a set of tasks T = {T1, . . . , Tn} is defined as the sum of the utilizations of all tasks of T, i.e., UT := ∑_{i=1}^{n} ui.

Given a scheduling algorithm ALG, the schedulable utilization UALG of ALG is the maximum number U such that for all T: UT ≤ U implies T is schedulable by ALG.
Multiprocessor Scheduling Taxonomy

Allocation (migration type):
No migration: each task is allocated to a processor
(Task-level migration: jobs of a task may execute on different processors; however, each job is assigned to a single processor)
Job-level migration: a single job can migrate and execute on different processors (however, parallel execution of one job is not permitted, and migration takes place only when the job is rescheduled)

Priority type:
Fixed task-level priority (e.g., RM)
Fixed job-level priority (e.g., EDF)
(Dynamic job-level priority)

Partitioned scheduling = no migration
Global scheduling = job-level migration

Fundamental Limit – Fixed Job-Level Priority

Consider m processors and m + 1 tasks T = {T1, . . . , Tm+1}, each Ti = (2L − 1, L). Then

UT = Σ_{i=1}^{m+1} L/(2L − 1) = (m + 1) · L/(2L − 1) = ((m + 1)/2) · L/(L − 1/2)

For very large L, this number is close to (m + 1)/2.

The set T is not schedulable using any fixed job-level priority algorithm. In other words, the schedulable utilization of fixed job-level priority algorithms is at most (m + 1)/2, i.e., half of the processors' capacity. There are variants of EDF achieving this bound (see later slides).
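A tiny numeric check of this limit in C (m = 4 is an arbitrary illustrative choice):

#include <stdio.h>

int main(void)
{
    int m = 4;                                /* number of processors */
    for (long L = 2; L <= 2000000; L *= 10) {
        /* m+1 tasks, each with period 2L-1 and execution time L */
        double u_total = (m + 1) * ((double)L / (2.0 * L - 1.0));
        printf("L = %8ld  U_T = %f\n", L, u_total);  /* tends to (m+1)/2 = 2.5 */
    }
    return 0;
}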
Partitioned vs Global Scheduling

Most algorithms up to the end of the 1990s were based on partitioned scheduling (no migration). From the end of the 1990s, many results concern global scheduling (job-level migration). Task-level migration has not been studied much, so it is not covered in this lecture. We consider fixed job-level priority (e.g., EDF) and fixed task-level priority (e.g., RM). As before, we ignore dynamic job-level priority.

Partitioned Scheduling & Fixed Job-Level Priority

The algorithm proceeds in two phases:
1. Allocate tasks to processors, i.e., partition the set of tasks into m possibly empty modules M1, . . . , Mm
2. Schedule the tasks of each Mi on processor i according to a given single-processor algorithm
The quality of a task assignment is determined by the number of assigned processors.

Use EDF to schedule the modules: it suffices to test whether the total utilization of each module is ≤ 1 (or possibly ≤ Û, where Û < 1, in order to accommodate aperiodic jobs ...). A first-fit sketch follows below.

The simple uniform-size bin-packing problem is polynomially reducible to finding an optimal schedule, so the latter is NP-hard. Similarly, we may use RM for fixed task-level priorities (total utilization in the modules ≤ ln 2, etc.).
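The promised sketch: first-fit allocation with the per-module EDF utilization test, a common heuristic rather than the NP-hard optimum (task utilizations and names are illustrative):

#include <stdio.h>

#define M 4            /* processors */
#define N 6            /* tasks */

/* First-fit: put each task on the first processor whose total
   utilization stays <= 1 (the EDF test for deadline = period).
   Returns 0 if some task cannot be placed. */
int first_fit(const double *u, int n, int assign[])
{
    double load[M] = {0};
    for (int i = 0; i < n; i++) {
        int placed = 0;
        for (int j = 0; j < M && !placed; j++) {
            if (load[j] + u[i] <= 1.0) {
                load[j] += u[i];
                assign[i] = j;
                placed = 1;
            }
        }
        if (!placed) return 0;
    }
    return 1;
}

int main(void)
{
    double u[N] = {0.5, 0.66, 0.66, 0.4, 0.3, 0.2}; /* task utilizations */
    int assign[N];
    if (first_fit(u, N, assign))
        for (int i = 0; i < N; i++)
            printf("task %d -> processor %d\n", i, assign[i]);
    else
        printf("first-fit failed\n");
    return 0;
}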
Global Scheduling

All ready jobs are kept in a global queue. When selected for execution, a job can be assigned to any processor. When preempted, a job goes back to the global queue (i.e., it forgets on which processor it executed).

Global Scheduling – Fixed Job-Level Priority

Dhall's effect: consider m > 1 processors and let ε > 0. Consider a set of tasks T = {T1, . . . , Tm, Tm+1} such that Ti = (1, 2ε) for 1 ≤ i ≤ m and Tm+1 = (1 + ε, 1).
T is schedulable, but the standard EDF and RM schedules are not feasible (whiteboard). However,

UT = m · 2ε + 1/(1 + ε)

which means that for small ε the utilization UT is close to 1 (i.e., UT/m is very small for m ≫ 1 processors).
Question: What is the maximum schedulable utilization of EDF on m processors?

How to avoid Dhall's effect?

Note that RM and EDF only account for task periods and ignore the execution times! (Partial) solution: Dhall's effect can be avoided by giving high priority to tasks with high utilization. Then, in the previous example, Tm+1 is executed whenever it comes, and the other tasks are assigned to the remaining processors – this produces a feasible schedule.

Global Scheduling – Fixed Job-Level Priority

Apparently there is a problem with long jobs due to Dhall's effect. There is an improved version of EDF called EDF-US(1/2) which assigns the highest priority to tasks with ui ≥ 1/2 and assigns priorities to the rest according to deadlines. It reaches the generic schedulable utilization bound (m + 1)/2.

Partitioned vs Global

Advantages of global scheduling: load is automatically balanced; better average response time (follows from queueing theory).
Disadvantages of global scheduling: problems caused by migration (e.g., increased cache misses); schedulability tests are more difficult (an active area of research).
Is either of the approaches better from the schedulability standpoint?

Global Beats Partitioned

There are sets of tasks schedulable only with a global scheduler: T = {T1, T2, T3} where T1 = (2, 1), T2 = (3, 2), T3 = (3, 2) can be scheduled using a global scheduler on two processors [schedule diagram on slides]. No feasible partitioned schedule exists: at least one processor always gets two tasks, and any two of the tasks have total utilization higher than 1 (e.g., 1/2 + 2/3 = 7/6).

Partitioned Beats Global

There are task sets that can be scheduled only with a partitioned scheduler (assuming fixed task-level priority assignment): T = {T1, . . . , T4} where T1 = (3, 2), T2 = (4, 3), T3 = (15, 5), T4 = (20, 5) can be scheduled using a fixed task-level priority partitioned schedule [schedule diagram on slides]. Under global scheduling (fixed job-level priority): there are 9 jobs released in the interval [0, 12), and any of the 9! possible priority assignments leads to a deadline miss.

Optimal Algorithm?

There IS an optimal algorithm in the case of job-level migration & dynamic job-level priority; however, the algorithm is clock driven. The proportionate fair (PFair) algorithm is optimal for periodic systems with deadlines equal to periods.

Idea (of PFair): in any interval (0, t], jobs of a task Ti with utilization ui execute for an amount of time W so that uit − 1 < W < uit + 1. (Here every parameter is assumed to be a natural number.) This is achieved by cutting time into small quanta and scheduling jobs in these quanta so that the execution times always stay (more or less) in proportion. A simplified sketch follows below.

There are other optimal algorithms; all of them suffer from a large number of preemptions/migrations. No optimal algorithms are known for more general settings: deadlines bounded by periods, arbitrary deadlines. Recall that no optimal on-line scheduling is possible.
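A much-simplified illustration of the proportionate idea in C: at every quantum, run on the m processors those tasks whose allocation lags furthest behind ui·t. This is not the actual PFair algorithm (which works with per-quantum pseudo-release times and pseudo-deadlines); it only sketches how allocations can be kept in proportion:

#include <stdio.h>

#define N 3   /* tasks */
#define M 2   /* processors */

int main(void)
{
    double u[N] = {0.5, 0.75, 0.75};  /* task utilizations, sum <= M */
    int alloc[N] = {0};               /* quanta allocated so far */

    for (int t = 1; t <= 20; t++) {   /* 20 time quanta */
        int chosen[M];
        /* pick the M tasks with the largest lag u_i*t - alloc_i */
        for (int slot = 0; slot < M; slot++) {
            int best = -1;
            double best_lag = -1e9;
            for (int i = 0; i < N; i++) {
                int taken = 0;
                for (int s = 0; s < slot; s++)
                    if (chosen[s] == i) taken = 1;
                if (taken) continue;
                double lag = u[i] * t - alloc[i];
                if (lag > best_lag) { best_lag = lag; best = i; }
            }
            chosen[slot] = best;
        }
        for (int s = 0; s < M; s++)
            alloc[chosen[s]]++;       /* the chosen tasks run this quantum */
        printf("t=%2d run:", t);
        for (int s = 0; s < M; s++) printf(" T%d", chosen[s] + 1);
        printf("\n");
    }
    return 0;
}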
Real-Time Scheduling: Scheduling of Reactive Systems – Clock-Driven Scheduling

Current Assumptions

Fixed number, n, of periodic tasks T1, . . . , Tn
Parameters of the periodic tasks are known a priori
The execution time ei,k of each job Ji,k in a task Ti is fixed
For a job Ji,k in a task Ti we have ri,1 = ϕi = 0 (i.e., synchronized) and ri,k = ri,k−1 + pi
We allow aperiodic tasks: assume that the system maintains a single queue for jobs of aperiodic tasks; whenever the processor is available for aperiodic tasks, the job at the head of this queue is executed. We treat sporadic tasks later.
Abuse of notation: periodic, aperiodic, and sporadic jobs are jobs of periodic, aperiodic, and sporadic tasks, respectively.

Static, Clock-Driven Scheduler

Construct a static schedule offline. The schedule specifies exactly when each job executes; the amount of time allocated to every job is equal to its execution time. The schedule repeats each hyperperiod, i.e., it suffices to compute the schedule up to the hyperperiod.
Complex algorithms can be used offline: the runtime of the scheduling algorithm is not relevant, and one can compute a schedule that optimizes some characteristics of the system, e.g., a schedule where the idle periods are nearly periodic (useful to accommodate aperiodic jobs).

Example

T1 = (4, 1), T2 = (5, 1.8), T3 = (20, 1), T4 = (20, 2); hyperperiod H = 20.
[schedule diagram: time axis 0–24 with rows for the aperiodic queue and tasks T1–T4]

Implementation of the Static Scheduler

Store the pre-computed schedule as a table. Each entry (tk, T(tk)) gives a decision time tk and a scheduling decision T(tk), which is either a task to be executed or idle (denoted by I).
The system creates all tasks that are to be executed: it allocates memory for the code and data and brings the code into memory.
The scheduler sets the hardware timer to interrupt at the first decision time t0 = 0. On receipt of an interrupt at tk:
the scheduler sets the timer interrupt to tk+1;
if the previous task is overrunning, it handles the failure;
if T(tk) = I and an aperiodic job is waiting, it starts executing it;
otherwise, it starts executing the next job in T(tk).
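A sketch of this interrupt-driven dispatcher in C; the timer and dispatch primitives (set_timer_after, dispatch, ...) are hypothetical stubs standing in for hardware/OS services, and the table entries mirror the example below:

typedef struct { double t; int task; } entry_t;   /* decision time, task index */
#define IDLE (-1)

/* Precomputed offline; covers one hyperperiod H. */
static const entry_t table[] = {
    {0.0, 0}, {1.0, 2}, {2.0, 1}, {3.8, IDLE}, {4.0, 0}, {5.0, IDLE}, {6.0, 3},
    /* ... up to H ... */
};
#define ENTRIES (sizeof table / sizeof table[0])
static const double H = 20.0;
static unsigned k = 0;                 /* index of the current decision time */

/* Hypothetical services provided elsewhere. */
void set_timer_after(double delay);
int  previous_task_overran(void);
void handle_overrun_failure(void);
int  aperiodic_job_waiting(void);
void execute_aperiodic_head(void);
void dispatch(int task);

void on_timer_interrupt(void)
{
    unsigned next = (k + 1) % ENTRIES;
    /* program the next interrupt; wrap around at the hyperperiod */
    double delay = (next != 0) ? table[next].t - table[k].t
                               : H - table[k].t + table[0].t;
    set_timer_after(delay);

    if (previous_task_overran())
        handle_overrun_failure();
    if (table[k].task == IDLE) {
        if (aperiodic_job_waiting())
            execute_aperiodic_head();
    } else {
        dispatch(table[k].task);
    }
    k = next;
}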
Example

T1 = (4, 1), T2 = (5, 1.8), T3 = (20, 1), T4 = (20, 2); hyperperiod H = 20.
[schedule diagram as above] The schedule table begins:

tk     0.0  1.0  2.0  3.8  4.0  5.0  6.0  · · ·
T(tk)  T1   T3   T2   I    T1   I    T4   · · ·

Frame-Based Scheduling

Arbitrary table-driven cyclic schedules are flexible but inefficient: they rely on accurate timer interrupts based on the execution times of tasks and have a high scheduling overhead.
The scheduler is easier to implement if a structure is imposed: make scheduling decisions at periodic intervals (frames) of length f, execute a fixed list of jobs within each frame, and allow no preemption within frames.
This gives two benefits: the scheduler can easily check for overruns and missed deadlines at the end of each frame, and it can use a periodic clock interrupt rather than a programmable timer.

Frame-Based Scheduling – Cyclic Executive

Modify the previous table-driven scheduler to be frame based. The table that drives the scheduler has F entries, where F = H/f; the k-th entry L(k) lists the names of the jobs that are to be scheduled in frame k (L(k) is called a scheduling block). Each job is implemented by a procedure.
The cyclic executive is executed by the clock interrupt that signals the start of a frame:
if an aperiodic job is executing, it is preempted; if a periodic job overruns, the overrun is handled;
the executive determines the appropriate scheduling block for this frame;
it executes the jobs in the scheduling block;
it executes jobs from the head of the aperiodic job queue for the remainder of the frame.
Less overhead than the pure table-driven cyclic scheduler, since the scheduler is only interrupted on frame boundaries rather than for each job.
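The same structure as a code sketch; the job-list and queue primitives are again hypothetical stubs, and the executive itself is just the frame interrupt handler:

typedef void (*job_fn)(void);

#define F 10                        /* frames per hyperperiod */

/* L[k] is the scheduling block of frame k: a NULL-terminated job list,
   filled in offline. */
extern job_fn *L[F];

/* Hypothetical services. */
void preempt_aperiodic_job(void);
int  periodic_job_overran(void);
void handle_frame_overrun(void);
void run_aperiodic_queue_until_frame_end(void);

static int frame = 0;

/* Executed by the clock interrupt signalling the start of a frame. */
void cyclic_executive(void)
{
    preempt_aperiodic_job();                /* aperiodic job, if any, must yield */
    if (periodic_job_overran())
        handle_frame_overrun();
    for (job_fn *j = L[frame]; *j != NULL; j++)
        (*j)();                             /* run the scheduling block */
    run_aperiodic_queue_until_frame_end();  /* remaining frame time for aperiodics */
    frame = (frame + 1) % F;
}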
Frame-Based Scheduling – Frame Size

How to choose the frame length? (Assume that periods are in N and choose frame sizes in N.)
1. A necessary condition for avoiding preemption of jobs is f ≥ max_i ei (i.e., we want each job to have a chance to finish within a frame).
2. To minimize the number of entries in the cyclic schedule, the hyperperiod should be an integer multiple of the frame size, i.e., ∃i : pi mod f = 0.
3. To allow the scheduler to check that jobs complete by their deadlines, at least one full frame should lie between the release time of a job and its deadline, which is equivalent to ∀i : 2f − gcd(pi, f) ≤ Di.
All three constraints should be satisfied; a small checker sketch follows the example below.

Example 35
T1 = (4, 1.0), T2 = (5, 1.8), T3 = (20, 1.0), T4 = (20, 2.0). Then f ∈ N satisfies 1.–3. iff f = 2. With f = 2 the set is schedulable [frame schedule diagram on slides].
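The promised checker for constraints 1.–3. in C (task parameters as in Example 35, with implicit deadlines Di = pi):

#include <stdio.h>

typedef struct { int p; double e; int D; } task_t;  /* period, exec time, deadline */

static int gcd(int a, int b) { return b ? gcd(b, a % b) : a; }

/* Does frame size f satisfy constraints 1.-3. for all n tasks? */
int frame_size_ok(const task_t *ts, int n, int f)
{
    int divides_some_period = 0;
    for (int i = 0; i < n; i++) {
        if ((double)f < ts[i].e) return 0;               /* 1. f >= max e_i      */
        if (ts[i].p % f == 0) divides_some_period = 1;   /* 2. exists i: f | p_i */
        if (2 * f - gcd(ts[i].p, f) > ts[i].D) return 0; /* 3. 2f - gcd <= D_i   */
    }
    return divides_some_period;
}

int main(void)
{
    task_t ts[] = { {4, 1.0, 4}, {5, 1.8, 5}, {20, 1.0, 20}, {20, 2.0, 20} };
    for (int f = 1; f <= 20; f++)
        if (frame_size_ok(ts, 4, f))
            printf("f = %d works\n", f);   /* prints only f = 2 */
    return 0;
}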
Frame-Based Scheduling – Job Slices

Sometimes a system cannot meet all three frame-size constraints simultaneously (and even when it meets them, no non-preemptive schedule may be feasible). This can be solved by partitioning a job with a large execution time into slices with shorter execution times; in effect, this allows preemption of the large job.
Consider T1 = (4, 1), T2 = (5, 2, 7), T3 = (20, 5). The constraints cannot be satisfied: 1. ⇒ f ≥ 5 but 3. ⇒ f ≤ 4. Solve this by splitting T3 into T3,1 = (20, 1), T3,2 = (20, 3), and T3,3 = (20, 1) (other splits exist). The result can be scheduled with f = 4.

Building a Structured Cyclic Schedule

To construct a schedule, we have to make three kinds of design decisions (which cannot be taken independently): choose a frame size based on the constraints, partition jobs into slices, and place slices into frames. There are efficient algorithms for solving these problems based, e.g., on a reduction to the network flow problem.

Scheduling Aperiodic Jobs

So far, aperiodic jobs have been scheduled in the background, after all jobs with hard deadlines. This may unnecessarily delay aperiodic jobs. Note that there is no advantage in completing periodic jobs early; ideally, periodic jobs finish exactly by their respective deadlines.

Slack Stealing

Slack time in a frame = the time left in the frame after all (remaining) slices execute. Schedule aperiodic jobs ahead of the periodic ones, in the slack time of the periodic jobs. The cyclic executive keeps track of the slack time left in each frame as the aperiodic jobs execute and preempts them with periodic jobs when there is no more slack. As long as there is slack remaining in a frame and the aperiodic job queue is non-empty, the executive executes aperiodic jobs; otherwise it executes periodic ones. This reduces the response time of aperiodic jobs but requires accurate timers.

Example: assume that the aperiodic queue is never empty. [two schedule diagrams on slides: aperiodic jobs at the ends of frames vs. slack stealing]

Frame-Based Scheduling – Sporadic Jobs

Let us allow sporadic jobs, i.e., hard real-time jobs whose release and execution times are not known a priori. The scheduler determines whether to accept a sporadic job when it arrives (and its parameters become known): it performs an acceptance test checking whether the new sporadic job can be feasibly scheduled with all the jobs (periodic and sporadic) in the system at that time. The acceptance check is done at the beginning of the next frame; the scheduler has to keep track of the execution times of the parts of sporadic jobs that have already executed. If there is sufficient slack time in the frames before the new job's deadline, the new sporadic job is accepted; otherwise it is rejected. Among themselves, sporadic jobs are scheduled according to EDF; this is optimal for sporadic jobs. A test sketch and a worked example follow below.
Note: rejection is often better than missing a deadline. E.g., for a robotic arm taking defective parts off a conveyor belt: if the arm cannot meet the deadline, the belt may be slowed down or stopped.
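A sketch of this frame-based acceptance test, assuming the executive maintains a per-frame slack table; the re-check of previously accepted sporadic jobs with later deadlines (needed once jobs are reordered by EDF, as the example below shows) is omitted for brevity, and all names are illustrative:

#define MAX_FRAMES 64

/* slack[k] = slack time remaining in frame k, maintained by the executive. */
static double slack[MAX_FRAMES];

/* Accept a sporadic job with execution time e that must run entirely in
   frames first..last (its deadline is the end of frame 'last')? */
int accept_sporadic_job(int first, int last, double e)
{
    double total = 0.0;
    for (int k = first; k <= last && k < MAX_FRAMES; k++)
        total += slack[k];
    if (total < e)
        return 0;                        /* not enough slack: reject */

    /* On acceptance, reserve the time, earliest frames first. */
    for (int k = first; k <= last && e > 0.0; k++) {
        double take = (slack[k] < e) ? slack[k] : e;
        slack[k] -= take;
        e -= take;
    }
    return 1;
}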
Example (acceptance tests over the frame schedule above):

S1(17, 4.5) is released at 3 with absolute deadline 17 and execution time 4.5; the acceptance test runs at 4; S1 must be scheduled in frames 2, 3, 4; the total slack in these frames is 4, so S1 is rejected.
S2(29, 4) is released at 5 with absolute deadline 29 and execution time 4; the acceptance test runs at 8; the total slack in frames 3–7 is 5.5, so S2 is accepted.
S3(22, 1.5) is released at 11 with absolute deadline 22 and execution time 1.5; the acceptance test runs at 12; there are 2 units of slack in frames 4 and 5. Since S3 will be executed ahead of the remaining parts of S2 by EDF, the scheduler also checks whether there will be enough slack for the remaining parts of S2; S3 is accepted.
S4(44, 5.0) is rejected (only 4.5 units of slack are left).

Handling Overruns

Overruns may happen due to failures, e.g., unexpectedly large data over which the system operates, hardware failures, etc. Ways to handle overruns:
Abort the overrun job at the beginning of the next frame, log the failure, and recover later; e.g., the control-law computation of a robust digital controller.
Preempt the overrun job and finish it as an aperiodic job; use this when aborting the job would cause "costly" inconsistencies.
Let the overrun job finish; the start of the next frame and the execution of the jobs scheduled for this frame are delayed, which may cause further jobs to be delayed.
Which one to use depends on the application.

Clock-Driven Scheduling: Conclusions

Advantages: conceptual simplicity; complex dependencies, communication delays, and resource contention among jobs can be considered when constructing the static schedule; the entire schedule is in a static table; no concurrency control or synchronization is needed; easy to validate, test, and certify.
Disadvantages: inflexible; if any parameter changes, the schedule must usually be recomputed; best suited for systems that are rarely modified (e.g., controllers); the parameters of the jobs must be fixed, as opposed to most priority-driven schedulers.

Real-Time Programming & RTOS – Concurrent and real-time programming tools

Concurrent Programming

Concurrency in real-time systems: the typical architecture of an embedded real-time system comprises several input units, computation, output units, and data logging/storing, i.e., it handles several concurrent activities; concurrency occurs naturally in real-time systems.
Support for concurrency in programming languages (Java, Ada, ...). Advantages: readability, OS independence, checking of interactions by the compiler; the embedded computer may not have an OS at all.
Support by libraries and the operating system (C/C++ with POSIX). Advantages: multi-language composition; a language's model of concurrency may be difficult to implement on top of an OS; OS API standards imply portability. A minimal POSIX threads example follows below.
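A minimal sketch of the library route in C with POSIX threads, one thread per concurrent activity (names illustrative):

#include <pthread.h>
#include <stdio.h>

/* Each thread handles one concurrent activity, e.g. one input unit. */
static void *input_unit(void *arg)
{
    int id = *(int *)arg;
    printf("input unit %d running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t th[3];
    int ids[3] = {0, 1, 2};
    for (int i = 0; i < 3; i++)
        pthread_create(&th[i], NULL, input_unit, &ids[i]);
    for (int i = 0; i < 3; i++)
        pthread_join(th[i], NULL);   /* wait for all activities */
    return 0;
}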
Processes and Threads

Process: a running instance of a program; it executes on its own virtual machine to avoid interference from other processes and contains information about program resources and execution state, e.g.: environment, working directory, ..., program instructions, registers, heap, stack, file descriptors, signal actions, inter-process communication tools (pipes, message boxes, etc.).
Thread: exists within a process and uses the process's resources; it can be scheduled by the OS and run as an independent entity; it keeps its own execution stack, local data, etc., and shares the global data and resources with the other threads of the same process.

Processes and threads in UNIX [diagram on slides]

Communication and Synchronization

Communication: the passing of information from one process (thread) to another. Typical methods: shared variables, message passing.
Synchronization: the satisfaction of constraints on the interleaving of the actions of processes, e.g., an action of one process has to occur after an action of another one. Typical methods: semaphores, monitors.
Communication and synchronization are linked: communication requires synchronization, and synchronization corresponds to communication without content.

Concurrent Programming is Complicated

Multi-threaded applications with shared data may have numerous flaws:
Race condition: two or more threads access the same shared data, and the result depends on the exact order in which their instructions are executed.
Deadlock: two or more threads wait for each other, forming a cycle and preventing all of them from making any forward progress.
Starvation: an indefinite delay or permanent blocking of one or more runnable threads in a multithreaded application.
Livelock: threads are scheduled but make no forward progress because they are continuously reacting to each other's state changes.
Such bugs are usually difficult to find, and correctness is hard to verify. A small race-condition example follows below.
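The classic race-condition sketch in C with POSIX threads, together with its mutex fix (illustrative):

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);    /* without this pair, counter++ is a  */
        counter++;                    /* non-atomic read-modify-write and   */
        pthread_mutex_unlock(&lock);  /* the final value is unpredictable   */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);   /* 2000000 with the mutex held */
    return 0;
}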
Real-Time Aspects

Time-aware systems make explicit references to the time frame of the enclosing environment; e.g., a bank safe's door is to be locked from midnight to nine o'clock. The "real time" of the environment must be available.
Reactive systems are typically concerned with relative times: an output has to be produced within 50 ms of an associated input. They must be able to measure intervals and usually must synchronize with the environment: input sampling and output signalling must be done very regularly, with controlled variability.

The Concept of Time

Real-time systems must have a concept of time – but what is time?
A measure of a time interval. Units? Seconds, milliseconds, CPU cycles, system "ticks". Granularity, accuracy, and stability of the clock source.
Is "one second" a well-defined measure? "A second is the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom." ... temperature dependencies and relativistic effects.
Skew and divergence among multiple clocks; distributed systems and clock synchronization.
Measuring time: an external source (GPS, NTP, etc.), or internal – hardware clocks that count the number of oscillations that occur in a quartz crystal.

Requirements for Interaction with "Time"

For RT programming, it is desirable to have: access to clocks and a representation of time; delays; timeouts; (deadline specification and real-time scheduling).

Access to Clock and Representation of Time

Requires a hardware clock that can be read like a regular external device; this is mostly offered as an OS service if direct interfacing to the hardware is not allowed. Example of a time representation (the POSIX high-resolution clock, counting seconds and nanoseconds since 1970 with a known resolution):

struct timespec {
    time_t tv_sec;
    long   tv_nsec;
};
int clock_gettime(clockid_t clock_id, struct timespec *tp);
int clock_settime(clockid_t id, const struct timespec *tp);

Time is often kept by incrementing an integer variable; one needs to take care of overflows (i.e., jumps into the past).

Delays

In addition to having access to a clock, we need the ability to:
Delay execution until an arbitrary calendar time. (What about daylight saving time changes? Problems with leap seconds.)
Delay execution for a relative period of time: delay for t seconds, or delay for t seconds after event e begins. The overshoots accumulate = the drift!

Timers

Set an alarm clock, do some work, and then wait for whatever time is left before the alarm rings. This is done with timers; a thread is told to wait until the next ring. Two types of timers:
one-shot: after a specified interval, call an associated function;
periodic (also called an auto-reload timer in FreeRTOS): call the associated function repeatedly, always after the specified interval.
Even with periodic timers overshoots occur, but they do not accumulate (local drift). A POSIX timer sketch appears at the end of this subsection.

Timeouts

Synchronous blocking operations can include timeouts:
Synchronization primitives (semaphores, locks, etc.): a timeout usually generates an error/exception.
Networking and other I/O calls, e.g., select() in POSIX: it monitors the readiness of multiple file descriptors (a descriptor is ready when the corresponding operation is possible without blocking) and has a timeout argument that specifies the minimum interval for which select() should block waiting for a file descriptor to become ready.
The system may also provide an asynchronous timeout signal, used to detect time overruns during the execution of periodic and sporadic tasks.

Deadline Specification and Real-Time Scheduling

Clock-driven scheduling is trivial to implement via a cyclic executive. Other scheduling algorithms need OS and/or language support:
System calls that create, destroy, suspend, and resume tasks.
Tasks implemented as either threads or processes. Threads are usually more beneficial than processes (which have separate address spaces and memory protection): processes are not always supported by the hardware, they have longer context-switch times, and threads can communicate using shared data (fast and more predictable).
Scheduling support: a preemptive scheduler with multiple priority levels, support for aperiodic tasks (at least background scheduling), and support for sporadic tasks with acceptance tests, etc.

Jobs, Tasks and Threads

In theory, a system comprises a set of (abstract) tasks, and each task is a series of jobs; tasks are typed, have various parameters, react to events, etc. An acceptance test is performed before admitting new tasks.
In practice, a thread (or a process) is the basic unit of work handled by the scheduler; threads are the instantiation of tasks that have been admitted to the system.
How to map tasks to threads?
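The promised timer sketch: a periodic POSIX timer built with timer_create/timer_settime (Linux/POSIX; link with -lrt on older systems; illustrative):

#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static void on_tick(union sigval sv)
{
    (void)sv;
    printf("tick\n");   /* the associated function, called every period */
}

int main(void)
{
    struct sigevent sev = {0};
    sev.sigev_notify = SIGEV_THREAD;      /* deliver via a new thread */
    sev.sigev_notify_function = on_tick;

    timer_t tid;
    timer_create(CLOCK_MONOTONIC, &sev, &tid);

    struct itimerspec its = {0};
    its.it_value.tv_sec    = 1;           /* first ring after 1 s */
    its.it_interval.tv_sec = 1;           /* then periodically, every 1 s */
    timer_settime(tid, 0, &its, NULL);    /* relative to now */

    for (;;)
        pause();                          /* do other work / sleep */
}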
Periodic Tasks

Real-time tasks are typically defined to execute periodically, T = (φ, p, e, D). It is clearly inefficient if the thread is created and destroyed repeatedly, every period.
Some operating systems (funkOS) and programming languages (Real-Time Java & Ada) support periodic threads: the kernel (or VM) reinitializes such a thread and puts it to sleep when the thread completes, and it releases the thread again at the beginning of the next period. This provides a clean abstraction but needs support from the OS.
The common alternative: the thread is instantiated once, performs its job, sleeps until the next period, and repeats. Lower overhead, but this relies on the programmer to handle the timing, and it is hard to avoid timing drift due to sleep overruns (see the discussion of delays earlier in this lecture). This is the most common approach; a drift-free loop sketch follows below.

Sporadic and Aperiodic Tasks

Events trigger sporadic and aperiodic tasks; they might be external (hardware) interrupts or might be signalled by another task.
Usual implementation: the OS executes a periodic server thread (background server, deferrable server, etc.) and maintains a "server queue" – a list of pointers giving the starting addresses of the functions to be executed by the server. Upon the occurrence of an event that releases an aperiodic or sporadic job, the event handler (usually an interrupt routine) inserts a pointer to the corresponding function into the list.
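The drift-free periodic-thread pattern mentioned above, using absolute-time sleeping so that overshoots of individual activations do not accumulate (POSIX; period and job body are illustrative):

#include <time.h>

#define PERIOD_NS 100000000L   /* p = 100 ms */

static void job(void) { /* the task's job body */ }

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        job();
        /* advance the release time by exactly one period ... */
        next.tv_nsec += PERIOD_NS;
        while (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        /* ... and sleep until that absolute time: overruns of one
           activation do not shift the later release times (no drift) */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}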
Real-Time Programming & RTOS – Real-Time Operating Systems

Operating Systems – What You Should Know ...

An operating system is a collection of software that manages computer hardware resources and provides common services for computer programs. Basic components of a multi-purpose OS:
Program execution & process management: processes (threads), IPC, scheduling, ...
Memory management: segmentation, paging, protection, ...
Storage & other I/O management: file systems, device drivers, ...
Network management: network drivers, protocols, ...
Security: user IDs, privileges, ...
User interface: shell, GUI, ...

Implementing Real-Time Systems

The key fact from scheduling theory: we need predictable behavior. Raw performance is less critical than consistent and predictable performance; hence the focus on scheduling algorithms and schedulability tests. We do not want to share resources fairly – be unfair to ensure that deadlines are met.
Real-time systems need to run on a wide range of – often custom – hardware and are often resource constrained: limited memory, CPU, power consumption, size, weight, budget. They run a closed set of applications (do we need a cardiac pacemaker playing music?) and have strong reliability requirements – they may be safety critical. How do you upgrade the software in a car engine?

Implications on Operating Systems

General-purpose operating systems are not well suited for real-time: they assume plentiful resources that are fairly shared among untrusted users, and they serve multiple purposes – exactly the opposite of an RTOS! Instead we want an operating system that is small and light on resources, predictable, customisable, modular and extensible, and reliable ... and that can be demonstrated or proven to be so.
Real-time operating systems are typically either cyclic executive or microkernel designs, rather than traditional monolithic kernels: limited and well-defined functionality (easier to demonstrate correctness, easier to customise), rich support for concurrency & real-time control, and low-level system details exposed to the applications (control of scheduling, interaction with hardware devices, ...).

Cyclic Executive without Interrupts

The simplest real-time systems use a "nanokernel" design: it provides a minimal time service – a scheduled clock pulse with a fixed period – and no tasking, virtual memory/memory protection, etc. This allows the implementation of a static cyclic schedule, provided that tasks can be scheduled in a frame-based manner and all interactions with hardware are done on a polled basis. The operating system becomes a single-task cyclic executive.

Microkernel Architecture

The cyclic executive is widely used in low-end embedded devices: 8-bit processors with kilobytes of memory, often programmed in (something like) C via a cross-compiler, or in assembler, with simple hardware interactions, a fixed, simple, and static task set, and a clock-driven scheduler.
But many real-time embedded systems are more complex and need a sophisticated operating system with priority scheduling. The common approach is a microkernel with a priority scheduler: configurable and robust, since it is architected around interactions between cooperating system servers, rather than a monolithic kernel with ad-hoc interactions.
A microkernel RTOS typically provides: timing services, interrupt handling, and support for hardware interaction; task management and scheduling; messaging and signals; synchronization and locking; memory management (and sometimes also protection).

Example RTOS: FreeRTOS

An RTOS for embedded devices, ported to many microcontrollers from more than 20 manufacturers; distributed under the GPL. Written in C; the kernel consists of 3+1 C source files (approx. 9000 lines of code including comments); largely configurable.
The OS is (more or less) a library of object modules; the application and OS modules are linked together into the resulting executable image.
Prioritized scheduling of tasks: tasks correspond to threads (they share the same address space but have their own execution stacks); the highest priority executes; equal priorities ⇒ round robin; an implicit idle task executes when no other task does and may be assigned the functionality of a background server.
Synchronization using semaphores; communication using message queues.
Memory management: no memory protection in the basic version (can be extended); various implementations of memory management: memory can/cannot be freed after allocation, best fit vs. combining adjacent memory blocks into a single one.
That's (almost) all ... a minimal task sketch follows below.
Tiny memory requirements: e.g., for the IAR STR71x ARM7 port with full optimisation, minimum configuration, and four priorities, the size of the scheduler is 236 bytes; each queue adds 76 bytes plus the storage area, and each task adds 64 bytes plus the stack size.
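A minimal FreeRTOS sketch matching the description above: one periodic task created with xTaskCreate and released every 100 ms with vTaskDelayUntil (port configuration omitted; stack size and priority are illustrative):

#include "FreeRTOS.h"
#include "task.h"

/* A periodic task: released every 100 ms; vTaskDelayUntil keeps an
   absolute wake-up time, so the release times do not drift. */
static void vPeriodicTask(void *pvParameters)
{
    (void)pvParameters;
    TickType_t xLastWake = xTaskGetTickCount();
    for (;;) {
        /* ... the job body ... */
        vTaskDelayUntil(&xLastWake, pdMS_TO_TICKS(100));
    }
}

int main(void)
{
    /* 256-word stack, priority 2; the task handle is not kept */
    xTaskCreate(vPeriodicTask, "periodic", 256, NULL, 2, NULL);
    vTaskStartScheduler();   /* never returns; the implicit idle task
                                runs whenever no other task is ready */
    for (;;) {}
}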