Real-Time Scheduling
Scheduling of Reactive Systems
[Some parts of this lecture are based on a real-time systems course by Colin Perkins, http://csperkins.org/teaching/rtes/index.html]

Reminder of Basic Notions
- Jobs are executed on processors and need resources
- Parameters of jobs
  - temporal:
    - release time – ri
    - execution time – ei
    - absolute deadline – di
    - derived parameters: relative deadline (Di), completion time, response time, ...
  - functional:
    - laxity type: hard vs. soft
    - preemptability
  - interconnection:
    - precedence constraints (or independence)
  - resource:
    - which resources are used by the job, and when
- Tasks = sets of jobs

Reminder of Basic Notions
- A schedule assigns, in every time instant, processors and resources to jobs
  - valid schedule = correct (common sense)
- A feasible schedule is valid and all hard real-time tasks meet their deadlines
- A set of jobs is schedulable if there is a feasible schedule for it
- A scheduling algorithm computes a schedule for a set of jobs
- A scheduling algorithm is optimal if it always produces a feasible schedule whenever such a schedule exists and, if a cost function is given, minimizes the cost

So far we have considered scheduling of individual jobs.

Scheduling Reactive Systems
From this point on we concentrate on reactive systems, i.e. systems that run for an unlimited amount of time.
Recall that a task is a set of related jobs that jointly provide some system function.
- We consider various types of tasks
  - Periodic
  - Aperiodic
  - Sporadic
- They differ in the execution-time patterns of their jobs
  - Must be modeled differently
  - Different scheduling algorithms
  - Different impact on system performance
  - Different constraints on scheduling

Periodic Tasks
- A set of jobs that are executed repeatedly at regular time intervals can be modeled as a periodic task
[Figure: timeline with jobs Ji,1, Ji,2, Ji,3, Ji,4, ... released at times ri,1, ri,2, ri,3, ri,4, ...; the phase ϕi marks the first release]
- Each periodic task Ti is a sequence of jobs Ji,1, Ji,2, ..., Ji,n, ...
- The phase ϕi of a task Ti is the release time ri,1 of the first job Ji,1 in the task Ti; tasks are in phase if their phases are equal
- The period pi of a task Ti is the minimum length of all time intervals between release times of consecutive jobs in Ti
- The execution time ei of a task Ti is the maximum execution time over all jobs in Ti
- The relative deadline Di is the relative deadline of all jobs in Ti
(The period and execution time of every periodic task in the system are known with reasonable accuracy at all times.)

Periodic Tasks – Notation
The 4-tuple Ti = (ϕi, pi, ei, Di) refers to a periodic task Ti with phase ϕi, period pi, execution time ei, and relative deadline Di.
For example, jobs of T1 = (1, 10, 3, 6) are
- released at times 1, 11, 21, ...,
- execute for 3 time units,
- have to be finished within 6 time units (the first by 7, the second by 17, ...).
The default phase of Ti is ϕi = 0 and the default relative deadline is Di = pi.
T2 = (10, 3, 6) satisfies ϕ2 = 0, p2 = 10, e2 = 3, D2 = 6, i.e. jobs of T2 are
- released at times 0, 10, 20, ...,
- execute for 3 time units,
- have to be finished within 6 time units (the first by 6, the second by 16, ...).
T3 = (10, 3) satisfies ϕ3 = 0, p3 = 10, e3 = 3, D3 = 10, i.e. jobs of T3 are
- released at times 0, 10, 20, ...,
- execute for 3 time units,
- have to be finished within 10 time units (the first by 10, the second by 20, ...).
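The notation maps directly onto a small data structure. Below is a minimal sketch (not part of the lecture; the class and method names are illustrative) that derives job release times and absolute deadlines from the 4-tuple:

```python
from dataclasses import dataclass

@dataclass
class PeriodicTask:
    """A periodic task Ti = (phase, period, execution_time, relative_deadline)."""
    phase: float = 0.0
    period: float = 1.0
    execution_time: float = 0.0
    relative_deadline: float = None   # None means "use the default Di = pi"

    def __post_init__(self):
        if self.relative_deadline is None:
            self.relative_deadline = self.period

    def release(self, k):
        """Release time r_{i,k} of the k-th job, k = 1, 2, ..."""
        return self.phase + (k - 1) * self.period

    def deadline(self, k):
        """Absolute deadline d_{i,k} = r_{i,k} + Di of the k-th job."""
        return self.release(k) + self.relative_deadline

# The three tasks used as examples above:
T1 = PeriodicTask(phase=1, period=10, execution_time=3, relative_deadline=6)
T2 = PeriodicTask(period=10, execution_time=3, relative_deadline=6)   # default phase 0
T3 = PeriodicTask(period=10, execution_time=3)                        # default Di = pi = 10

print([T1.release(k) for k in (1, 2, 3)])    # [1, 11, 21]
print([T1.deadline(k) for k in (1, 2, 3)])   # [7, 17, 27]
print([T3.deadline(k) for k in (1, 2, 3)])   # [10, 20, 30]
```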
Periodic Tasks – Hyperperiod
The hyper-period H of a set of periodic tasks is the least common multiple of their periods.
If the tasks are in phase, then H is the time instant after which the pattern of job release/execution times starts to repeat.
[Figure: timeline 0–30 with two consecutive hyper-periods marked H]

Aperiodic and Sporadic Tasks
- Many real-time systems are required to respond to external events
- The tasks resulting from such events are sporadic and aperiodic tasks
  - Sporadic tasks – jobs have hard deadlines
    e.g. autopilot on/off in an aircraft
  - Aperiodic tasks – jobs have soft deadlines
    e.g. sensitivity adjustment of a radar surveillance system
- Inter-arrival times between consecutive jobs are independent and identically distributed according to a probability distribution A(x)
- Execution times of jobs are independent and identically distributed according to a probability distribution B(x)
- In the case of sporadic tasks, the usual goal is to decide whether a newly released job can be feasibly scheduled with the remaining jobs in the system
- In the case of aperiodic tasks, the usual goal is to minimize the average response time

Scheduling – Classification of Algorithms
- Off-line vs. online
  - Off-line – the scheduling algorithm is executed on the whole task set before activation
  - Online – the schedule is updated at runtime every time a new task enters the system
- Optimal vs. heuristic
  - Optimal – the algorithm computes a feasible schedule and minimizes the cost of soft real-time jobs
  - Heuristic – the algorithm is guided by a heuristic function; it tends towards an optimal schedule but may not produce one
The main division is between
- Clock-Driven
- Priority-Driven

Scheduling – Clock-Driven
- Decisions about what jobs execute when are made at specific time instants
  - these instants are chosen before the system begins execution
  - usually regularly spaced, implemented using a periodic timer interrupt
  - the scheduler awakes after each interrupt, schedules jobs to execute for the next period, then blocks itself until the next interrupt
    E.g. the helicopter example with an interrupt every 1/180th of a second
- Typically in clock-driven systems:
  - All parameters of the real-time jobs are fixed and known
  - A schedule of the jobs is computed off-line and stored for use at runtime; thus scheduling overhead at run-time can be minimized
- Simple and straightforward, but not flexible

Scheduling – Priority-Driven
- Assign priorities to jobs, based on some algorithm
- Make scheduling decisions based on the priorities, when events such as job releases and completions occur
  - Priority scheduling algorithms are event-driven
- Jobs are placed in one or more queues; at each event, the ready job with the highest priority is executed
(The assignment of jobs to priority queues, along with rules such as whether preemption is allowed, completely defines a priority-driven algorithm.)
- Priority-driven algorithms make locally optimal scheduling decisions
  - Locally optimal scheduling is often not globally optimal
  - Priority-driven algorithms never intentionally leave processors idle
- Typically in priority-driven systems:
  - Some parameters do not have to be fixed or known
  - The schedule is computed online; this usually results in larger scheduling overhead compared to clock-driven scheduling
  - Flexible – easy to add/remove tasks or modify parameters

Clock-Driven & Priority-Driven Example
         T1   T2   T3
  pi      3    5   10
  ei      1    2    1
Clock-Driven:
[Gantt chart of a clock-driven schedule over 0–30]
Priority-driven, with priorities T1 ≻ T2 ≻ T3:
[Gantt chart of the preemptive priority-driven schedule over 0–30]
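To make the priority-driven behaviour concrete, here is a minimal simulation sketch (not from the lecture). It computes the hyper-period of the example task set above and steps through it in unit increments, which suffices here because all phases, periods, and execution times are integers; it assumes Python 3.9+ for math.lcm.

```python
from math import lcm

# Example task set from the slide above: (name, period, execution time),
# deadlines equal to periods, all phases 0, listed from highest to lowest priority.
tasks = [("T1", 3, 1), ("T2", 5, 2), ("T3", 10, 1)]

H = lcm(*[p for _, p, _ in tasks])              # hyper-period: lcm(3, 5, 10) = 30

remaining = {name: 0 for name, _, _ in tasks}   # unfinished work of the current job
schedule = []                                   # what executes in each unit interval

for t in range(H):
    # release a new job of every task whose period divides the current time
    for name, p, e in tasks:
        if t % p == 0:
            remaining[name] += e
    # run the highest-priority task with pending work (tasks are priority-ordered)
    running = next((name for name, _, _ in tasks if remaining[name] > 0), "idle")
    if running != "idle":
        remaining[running] -= 1
    schedule.append(running)

print("H =", H)
print(" ".join(schedule))
```

Since the total utilization 1/3 + 2/5 + 1/10 is below 1, the trace also contains idle units, which a clock-driven table could instead reserve for aperiodic jobs.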
Real-Time Scheduling
Scheduling of Reactive Systems
Clock-Driven Scheduling

Current Assumptions
- Fixed number, n, of periodic tasks T1, ..., Tn
- Parameters of the periodic tasks are known a priori
  - The execution time ei,k of each job Ji,k in a task Ti is fixed
- For a job Ji,k in a task Ti we have
  - ri,1 = ϕi = 0 (i.e., the tasks are in phase)
  - ri,k = ri,k−1 + pi
- We allow aperiodic jobs
  - assume that the system maintains a single queue for aperiodic jobs
  - whenever the processor is available for aperiodic jobs, the job at the head of this queue is executed
- We treat sporadic jobs later

Static, Clock-Driven Scheduler
- Construct a static schedule offline
  - The schedule specifies exactly when each job executes
  - The amount of time allocated to every job is equal to its execution time
  - The schedule repeats every hyper-period, i.e. it suffices to compute the schedule up to the hyper-period
- Can use complex algorithms offline
  - The runtime of the scheduling algorithm is not relevant
  - Can compute a schedule that optimizes some characteristic of the system,
    e.g. a schedule where the idle periods are nearly periodic (useful to accommodate aperiodic jobs)

Example
T1 = (4, 1), T2 = (5, 1.8), T3 = (20, 1), T4 = (20, 2); hyper-period H = 20
[Gantt chart of the static schedule over 0–24, showing when T1–T4 and aperiodic jobs execute]

Implementation of Static Scheduler
- Store the pre-computed schedule as a table
- Each entry (tk, T(tk)) gives
  - a decision time tk
  - a scheduling decision T(tk), which is either a task to be executed or idle (denoted by I)
- The system creates all tasks that are to be executed:
  - allocates memory for the code and data
  - brings the code into memory
- The scheduler sets the hardware timer to interrupt at the first decision time t0 = 0
- On receipt of an interrupt at tk:
  - the scheduler sets the timer to interrupt at tk+1
  - if the previous task is overrunning, handle the failure
  - if T(tk) = I and an aperiodic job is waiting, start executing it
  - otherwise, start executing the next job in T(tk)

Example
T1 = (4, 1), T2 = (5, 1.8), T3 = (20, 1), T4 = (20, 2); hyper-period H = 20
[Gantt chart of the static schedule over 0–24, as above]

  tk      0.0  1.0  2.0  3.8  4.0  5.0  6.0  ...
  T(tk)   T1   T3   T2   I    T1   I    T4   ...
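The table-driven scheduler itself can be sketched as a loop over the decision table. The following skeleton is illustrative only: the execute callback and the aperiodic queue are assumed interfaces, and a real system would program a hardware timer to interrupt at t(k+1) rather than iterate; overrun handling is omitted.

```python
# Pre-computed decision table for the example above; "I" denotes an idle interval.
# Only the first few entries are shown; the full table covers one hyper-period H = 20.
SCHEDULE_TABLE = [
    (0.0, "T1"), (1.0, "T3"), (2.0, "T2"), (3.8, "I"),
    (4.0, "T1"), (5.0, "I"), (6.0, "T4"),
]
H = 20.0

def scheduler_loop(execute, aperiodic_queue):
    """Cycle through the table for ever; execute(job, budget) is assumed to run
    `job` for at most `budget` time units (e.g. until the next timer interrupt)."""
    k = 0
    while True:
        t_k, decision = SCHEDULE_TABLE[k]
        t_next = SCHEDULE_TABLE[(k + 1) % len(SCHEDULE_TABLE)][0]
        budget = (t_next - t_k) % H          # time until the next decision point
        if decision == "I":
            if aperiodic_queue:              # use idle time for aperiodic jobs
                execute(aperiodic_queue.pop(0), budget)
        else:
            execute(decision, budget)        # the periodic job planned for this slot
        k = (k + 1) % len(SCHEDULE_TABLE)
```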
Frame Based Scheduling
- Arbitrary table-driven cyclic schedules are flexible, but inefficient
  - rely on accurate timer interrupts, based on the execution times of tasks
  - high scheduling overhead
- Easier to implement if a structure is imposed
  - Make scheduling decisions at periodic intervals (frames) of length f
  - Execute a fixed list of jobs within each frame; no preemption within frames
- Gives two benefits:
  - The scheduler can easily check for overruns and missed deadlines at the end of each frame.
  - A periodic clock interrupt can be used, rather than a programmable timer.
How to choose the size of frames? How to compute a schedule?
To simplify further development, assume that periods are in N and choose frame sizes in N.

Frame Based Scheduling – Frame Size
How to choose the frame length?
1. A necessary condition for avoiding preemption of jobs is f ≥ max_i ei
   (i.e. we want each job to have a chance to finish within a frame).
2. To minimize the number of entries in the cyclic schedule, the hyper-period should be an integer multiple of the frame size; this is guaranteed if f divides the period of at least one task, i.e. ∃i : pi mod f = 0.
3. To allow the scheduler to check that jobs complete by their deadlines, at least one full frame should lie between the release time of a job and its deadline; this is equivalent to 2f − gcd(pi, f) ≤ Di for all tasks Ti.
All three constraints should be satisfied.

Frame Based Scheduling – Frame Size – Example
Example 12: T1 = (4, 1.0), T2 = (5, 1.8), T3 = (20, 1.0), T4 = (20, 2.0)
Then f ∈ N satisfies 1.–3. iff f = 2. With f = 2 the task set is schedulable.
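The three constraints are easy to check mechanically. Here is a small sketch (illustrative only; it assumes integer periods as above, and Python 3.9+ for math.lcm) that enumerates the feasible frame sizes for the example task set:

```python
from math import gcd, lcm

def feasible_frame_sizes(tasks):
    """Return all frame sizes f in {1, ..., H} satisfying constraints 1.-3. above.

    tasks: list of (p_i, e_i, D_i); periods are assumed to be integers.
    """
    H = lcm(*[p for p, _, _ in tasks])
    feasible = []
    for f in range(1, H + 1):
        c1 = all(f >= e for _, e, _ in tasks)                  # 1. no job has to be preempted
        c2 = any(p % f == 0 for p, _, _ in tasks)               # 2. f divides some period, hence f | H
        c3 = all(2 * f - gcd(p, f) <= D for p, _, D in tasks)   # 3. a full frame fits between release and deadline
        if c1 and c2 and c3:
            feasible.append(f)
    return feasible

# Example 12 above: T1 = (4, 1.0), T2 = (5, 1.8), T3 = (20, 1.0), T4 = (20, 2.0), Di = pi
print(feasible_frame_sizes([(4, 1.0, 4), (5, 1.8, 5), (20, 1.0, 20), (20, 2.0, 20)]))   # -> [2]
```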
Frame Based Scheduling – Job Slices
- Sometimes a system cannot meet all three frame-size constraints simultaneously (and even when it can, a feasible schedule without preemption may not exist)
- This can be solved by partitioning a job with a large execution time into slices with shorter execution times
  This, in effect, allows preemption of the large job
- Consider T1 = (4, 1), T2 = (5, 2, 7), T3 = (20, 5)
  - The constraints cannot be satisfied: 1. ⇒ f ≥ 5 but 3. ⇒ f ≤ 4
  - Solve by splitting T3 into T3,1 = (20, 1), T3,2 = (20, 3), and T3,3 = (20, 1) (other splits exist)
  - The result can be scheduled with f = 4

Building a Structured Cyclic Schedule
To construct a schedule, we have to make three kinds of design decisions (which cannot be taken independently):
- Choose a frame size based on the constraints
- Partition jobs into slices
- Place slices into frames
There are efficient algorithms for solving these problems, based e.g. on a reduction to the network-flow problem.

Frame Based Scheduling – Cyclic Executive
- Modify the previous table-driven scheduler to be frame based
  - The table that drives the scheduler has F entries, where F = H/f
  - The k-th entry L(k) lists the names of the job slices that are to be scheduled in frame k (L(k) is called a scheduling block)
  - Each job slice is implemented by a procedure
- The cyclic executive is executed by the clock interrupt that signals the start of a frame:
  - If an aperiodic job is executing, preempt it; if a periodic job has overrun, handle the overrun
  - Determine the appropriate scheduling block for this frame
  - Execute the jobs in the scheduling block
  - Execute jobs from the head of the aperiodic job queue for the remainder of the frame
- Less overhead than the pure table-driven cyclic scheduler, since the scheduler is interrupted only on frame boundaries rather than on each job

Scheduling Aperiodic Jobs
So far, aperiodic jobs have been scheduled in the background, after all jobs with hard deadlines. This may unnecessarily delay aperiodic jobs.
Note: there is no advantage in completing periodic jobs early; ideally, periodic jobs finish just by their respective deadlines.
Slack Stealing:
- Slack time in a frame = the time left in the frame after all (remaining) slices execute
- Schedule aperiodic jobs ahead of periodic jobs, in the slack time of the periodic jobs
- The cyclic executive keeps track of the slack time left in each frame as the aperiodic jobs execute, and preempts them with periodic jobs when there is no more slack
- As long as there is slack remaining in a frame and the aperiodic job queue is non-empty, the executive executes aperiodic jobs; otherwise it executes periodic jobs
- Reduces response time for aperiodic jobs, but requires accurate timers

Example
Assume that the aperiodic queue is never empty.
Aperiodic jobs at the ends of frames:
[Gantt chart over 0–24 showing aperiodic jobs executed after the periodic jobs T1–T4 in each frame]
Slack stealing:
[Gantt chart over 0–24 showing aperiodic jobs executed in the slack ahead of the periodic jobs T1–T4]

Slack Stealing – cont.
[Figure comparing response times of aperiodic jobs under slack stealing and under standard background scheduling, for the same aperiodic release times and periodic workload]

Frame Based Scheduling – Sporadic Jobs
Let us now allow sporadic jobs, i.e. hard real-time jobs whose release and execution times are not known a priori.
The scheduler determines whether to accept a sporadic job when it arrives (and its parameters become known).
- Perform an acceptance test to check whether the new sporadic job can be feasibly scheduled with all the jobs (periodic and sporadic) in the system at that time
  - The acceptance check is done at the beginning of the next frame; the scheduler has to keep track of the execution times of the parts of sporadic jobs that have already executed
  - If there is sufficient slack time in the frames before the new job's deadline, the new sporadic job is accepted; otherwise it is rejected
- Among themselves, sporadic jobs are scheduled according to EDF
  This is optimal for sporadic jobs
Note: rejection is often better than missing a deadline,
e.g. a robotic arm taking defective parts off a conveyor belt: if the arm cannot meet its deadline, the belt may be slowed down or stopped.

Example
- S1(17, 4.5), released at 3 with absolute deadline 17 and execution time 4.5; acceptance test at 4; must be scheduled in frames 2, 3, 4; the total slack in these frames is 4, so S1 is rejected
- S2(29, 4), released at 5 with absolute deadline 29 and execution time 4; acceptance test at 8; the total slack in frames 3–7 is 5.5, so S2 is accepted
- S3(22, 1.5), released at 11 with absolute deadline 22 and execution time 1.5; acceptance test at 12; there are 2 units of slack in frames 4 and 5, and since S3 will be executed ahead of the remaining parts of S2 by EDF, we also check whether there will still be enough slack for the remaining parts of S2 – S3 is accepted
- S4(44, 5.0) is rejected (only 4.5 units of slack are left)

Handling Overruns
Overruns may happen due to failures, e.g. unexpectedly large data over which the system operates, hardware failures, etc.
Ways to handle overruns:
- Abort the overrun job at the beginning of the next frame; log the failure; recover later
  e.g. the control-law computation of a robust digital controller
- Preempt the overrun job and finish it as an aperiodic job
  use this when aborting the job would cause "costly" inconsistencies
- Let the overrun job finish – the start of the next frame and the execution of the jobs scheduled for that frame are delayed
  This may cause other jobs to be delayed; which option is best depends on the application

Clock-Driven Scheduling: Conclusions
Advantages:
- Conceptual simplicity
- Complex dependencies, communication delays, and resource contention among jobs can be taken into account when constructing the static schedule
- The entire schedule is in a static table
- No concurrency control or synchronization is needed
- Easy to validate, test and certify
Disadvantages:
- Inflexible
  - If any parameter changes, the schedule must usually be recomputed
  - Best suited for systems which are rarely modified (e.g. controllers)
- Parameters of the jobs must be fixed
  As opposed to most priority-driven schedulers
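As a closing illustration of the acceptance test described above, here is a simplified sketch. It is not the lecture's algorithm: it assumes that deadlines fall on frame boundaries and that the cyclic executive maintains a table of remaining slack per frame; all names are illustrative.

```python
def accept_sporadic(slack_per_frame, accepted, new_job, current_frame):
    """Acceptance test performed at a frame boundary.

    slack_per_frame : remaining slack in each frame, indexed by frame number
    accepted        : (deadline_frame, remaining_work) pairs of sporadic jobs
                      already admitted to the system
    new_job         : (deadline_frame, work) of the newly released sporadic job
    current_frame   : frame at which the test is performed
    """
    # Simulate EDF over the available slack: jobs are served in deadline order,
    # each consuming the earliest slack before its own deadline.
    jobs = sorted(accepted + [new_job])      # EDF order (earliest deadline first)
    slack = list(slack_per_frame)            # work on a copy; commit only on acceptance
    for deadline_frame, work in jobs:
        for k in range(current_frame, deadline_frame):
            used = min(work, slack[k])
            slack[k] -= used
            work -= used
            if work == 0:
                break
        if work > 0:
            return False                     # some job would miss its deadline
    return True
```

This mirrors the reasoning of the worked example: S1 fails because its 4.5 units of work do not fit into the 4 units of slack available before its deadline, while accepting S3 additionally requires rechecking that enough slack remains for the parts of S2 still in the system.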