Real-Time Scheduling
Scheduling of Reactive Systems
Priority-Driven Scheduling

Current Assumptions

- Single processor
- Fixed number, n, of independent periodic tasks, i.e. there is no dependency relation among jobs
- Jobs can be preempted at any time and never suspend themselves
- No aperiodic and sporadic jobs
- No resource contentions

Moreover, unless otherwise stated, we assume that
- Scheduling decisions take place precisely at
  - release of a job
  - completion of a job
  (and nowhere else)
- Context switch overhead is negligibly small, i.e. assumed to be zero
- There is an unlimited number of priority levels

Fixed-Priority vs Dynamic-Priority Algorithms

A priority-driven scheduler is on-line, i.e. it does not precompute a schedule of the tasks.
- It assigns priorities to jobs after they are released and places the jobs in a ready job queue in priority order, with the highest-priority jobs at the head of the queue.
- At each scheduling decision time, the scheduler updates the ready job queue and then schedules and executes the job at the head of the queue, i.e. one of the jobs with the highest priority.

Fixed-priority = all jobs in a task are assigned the same priority
Dynamic-priority = jobs in a task may be assigned different priorities

Note: In our case, a priority assigned to a job does not change. There are job-level dynamic-priority algorithms that vary the priorities of individual jobs – we won't consider such algorithms.
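The ready-job-queue mechanism just described can be sketched in a few lines of Python (the `ReadyQueue` class and its method names are mine, not from the slides; a heap keeps the highest-priority job at the head):

```python
import heapq

class ReadyQueue:
    """Minimal sketch of a priority-driven ready job queue.

    Jobs are kept in a heap ordered by priority (smaller number = higher
    priority); at each scheduling decision the job at the head runs.
    """

    def __init__(self):
        self._heap = []
        self._count = 0  # insertion counter: ties are broken arbitrarily (here FIFO)

    def release(self, priority, job):
        # Called when a job is released: place it in the queue in priority order.
        heapq.heappush(self._heap, (priority, self._count, job))
        self._count += 1

    def head(self):
        # The job to execute next: one of the jobs with the highest priority.
        return self._heap[0][2] if self._heap else None

    def complete(self):
        # Called when the running job completes: remove it from the queue.
        return heapq.heappop(self._heap)[2]
```

Under a fixed-priority algorithm (e.g. RM) every job of a task is released with the task's priority; under a dynamic-priority algorithm (e.g. EDF) the priority would be the job's absolute deadline.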
Fixed-Priority Algorithms – Rate Monotonic

The best-known fixed-priority algorithm is rate monotonic (RM) scheduling, which assigns priorities to tasks based on their periods:
- The shorter the period, the higher the priority.
- The rate is the inverse of the period, so jobs of a task with a higher rate have higher priority.
RM is very widely studied and used.

Example 13
T1 = (4, 1), T2 = (5, 2), T3 = (20, 5) with rates 1/4, 1/5, 1/20, respectively.
The priorities: T1 ≻ T2 ≻ T3
(Timeline figure: the RM schedule of T1, T2, T3 over [0, 20].)

Fixed-Priority Algorithms – Deadline Monotonic

The deadline monotonic (DM) algorithm assigns priorities to tasks based on their relative deadlines:
- the shorter the relative deadline, the higher the priority.

Observation: When the relative deadline of every task matches its period, RM and DM give the same results.

Proposition 1
When the relative deadlines are arbitrary, DM can sometimes produce a feasible schedule in cases where RM cannot.

Rate Monotonic vs Deadline Monotonic

T1 = (50, 50, 25, 100), T2 = (0, 62.5, 10, 20), T3 = (0, 125, 25, 50)

DM produces a feasible schedule (with priorities T2 ≻ T3 ≻ T1), whereas RM does not (with priorities T1 ≻ T2 ≻ T3).
(Timeline figures: the DM and RM schedules of T1, T2, T3 over [0, 250].)

Dynamic-Priority Algorithms

The best-known is earliest deadline first (EDF), which assigns priorities based on the current (absolute) deadlines:
- At the time of a scheduling decision, the job queue is ordered by earliest deadline.

Another one is least slack time (LST):
- The job queue is ordered by least slack time.
  Recall that the slack time of a job Ji at time t equals di − t − x, where x is the remaining computation time of Ji at time t.

Comments:
- There is also a strict LST, which reassigns priorities to jobs whenever their slacks change relative to each other – we won't consider it.
- Standard "non-real-time" algorithms such as FIFO and LIFO are also dynamic-priority algorithms.

We focus on EDF
here; some LST is left for homework.

EDF – Example

T1 = (2, 1) and T2 = (5, 2.5)
(Timeline figure: the EDF schedule of T1 and T2 over [0, 10].)
Note that the processor is 100% "utilized" – not surprising :-)

Summary of Priority-Driven Algorithms

We consider:
Dynamic-priority:
- EDF = at the time of a scheduling decision, the job queue is ordered by the earliest deadline
Fixed-priority:
- RM = assigns priorities to tasks based on their periods
- DM = assigns priorities to tasks based on their relative deadlines
(In all cases, ties are broken arbitrarily.)

We consider the following questions:
- Are the algorithms optimal?
- How to efficiently (or even online) test for schedulability?

To measure the abilities of scheduling algorithms and to obtain fast online tests of schedulability, we use the notion of utilization.

Utilization

- The utilization ui of a periodic task Ti with period pi and execution time ei is defined by ui := ei/pi.
  ui is the fraction of time a periodic task with period pi and execution time ei keeps a processor busy.
- The total utilization UT of a set of tasks T = {T1, ..., Tn} is defined as the sum of the utilizations of all tasks of T, i.e.

      UT := Σ_{i=1}^{n} ui

- U is a schedulable utilization of an algorithm ALG if every set of tasks T satisfying UT ≤ U is schedulable by ALG.
  The maximum schedulable utilization UALG of an algorithm ALG is the supremum of the schedulable utilizations of ALG.
  - If UT < UALG, then T is schedulable by ALG.
  - If U > UALG, then there is a T with UT ≤ U that is not schedulable by ALG.

Utilization – Example

- T1 = (2, 1): then u1 = 1/2
- T1 = (11, 5, 2, 4): then u1 = 2/5 (i.e., the phase and the deadline do not play any role)
- T = {T1, T2, T3} where T1 = (2, 1), T2 = (6, 1), T3 = (8, 3): then UT = 1/2 + 1/6 + 3/8 = 25/24

Real-Time Scheduling
Priority-Driven Scheduling
Dynamic-Priority

Optimality of EDF

Theorem 14
Let T = {T1, ..., Tn} be a set of independent, preemptable periodic tasks with Di ≥ pi for i = 1, ..., n. The following statements are equivalent:
1. T can be feasibly scheduled on one processor.
2. UT ≤ 1.
3. T is schedulable using EDF.
(In particular, UEDF = 1.)

Proof.
1.⇒2.: We prove that UT > 1 implies that T is not schedulable. (whiteboard)
2.⇒3.: Next slides and whiteboard.
3.⇒1.: Trivial. □

Proof of 2.⇒3. – Simplified

Let us start with a proof of a special case (see the assumptions A1 and A2 below); a complete proof is presented afterwards.

We prove ¬3. ⇒ ¬2., assuming that Di = pi for i = 1, ..., n. (Note that the general case immediately follows.)

Assume that T is not schedulable according to EDF. (Our goal is to show that UT > 1.) This means that there must be at least one job that misses its deadline when EDF is used.

Simplifying assumptions:
A1 Suppose that all tasks are in phase, i.e. the phase ϕℓ = 0 for every task Tℓ.
A2 Suppose that the first job Ji,1 of a task Ti misses its deadline.

By A1, Ji,1 is released at 0 and misses its deadline at pi. Assume w.l.o.g. that this is the first time a job misses its deadline. (To simplify even further, you may (privately) assume that no other job has its deadline at pi.)

Let G be the set of all jobs that are released in [0, pi] and have their deadlines in [0, pi].

Crucial observations:
- G contains Ji,1 and all jobs that preempt Ji,1. (If there are more jobs with deadline pi, then these jobs do not have to preempt Ji,1. Assume the worst case: all of them preempt Ji,1.)
- The processor is never idle during [0, pi] and executes only jobs of G.

Denote by EG the total execution time of G, that is, the sum of the execution times of all jobs in G.

Corollary of the crucial observations: EG > pi, because otherwise Ji,1 (and all jobs that preempt it) would complete by pi.

Let us compute EG. Since we assume ϕℓ = 0 for every Tℓ, the first job of Tℓ is released at 0, and thus ⌊pi/pℓ⌋ jobs of Tℓ belong to G.
E.g., if pℓ = 2 and pi = 5, then three jobs of Tℓ are released in [0, 5] (at times 0, 2, 4), but only 2 = ⌊5/2⌋ = ⌊pi/pℓ⌋ of them have their deadlines in [0, pi].

Thus the total execution time EG of all jobs in G is

    EG = Σ_{ℓ=1}^{n} ⌊pi/pℓ⌋ eℓ

But then

    pi < EG = Σ_{ℓ=1}^{n} ⌊pi/pℓ⌋ eℓ ≤ Σ_{ℓ=1}^{n} (pi/pℓ) eℓ = pi Σ_{ℓ=1}^{n} uℓ = pi · UT

which implies that UT > 1.

Proof of 2.⇒3. – Complete

Now let us drop the simplifying assumptions A1 and A2!

Notation: Given a set of tasks L, we denote by ⋃L the set of all jobs of the tasks in L.

We prove ¬3. ⇒ ¬2., assuming that Di = pi for i = 1, ..., n (again, the general case immediately follows).

Assume that T is not schedulable by EDF. We show that UT > 1.

Suppose that a job Ji,k of Ti misses its deadline at time t = ri,k + pi. Assume that this is the earliest deadline miss.

Let T' be the set of all tasks whose jobs have deadlines (and thus also release times) in [ri,k, t] (i.e., a task belongs to T' iff at least one job of the task is released in [ri,k, t]).

Let t− be the end of the latest interval before t in which either jobs of ⋃(T ∖ T') are executed, or the processor is idle. Then ri,k ≥ t−, since all jobs of ⋃(T ∖ T') waiting for execution during [ri,k, t] have deadlines later than t (and thus lower priorities than Ji,k).

It follows that
- no job of ⋃(T ∖ T') is executed in [t−, t] (by definition of t−),
- the processor is fully utilized in [t−, t] (by definition of t−),
- all jobs executed in [t−, t] (which all must belong to ⋃T') are released in [t−, t] and have their deadlines in [t−, t], since
  - no job of ⋃T' executes just before t−,
  - all jobs of ⋃T' released in [t−, ri,k] have deadlines before t,
  - jobs of ⋃T' released in [ri,k, t] with deadlines after t are not executed in [ri,k, t], as they have lower priorities than Ji,k.

Let G be the set of all jobs that are released in [t−, t] and have their deadlines in [t−, t]. Note that Ji,k ∈ G, since ri,k ≥ t−.
Denote by EG the sum of the execution times of all jobs in G (the total execution time of G).

Proof of 2.⇒3. – Complete (cont.)

Now EG > t − t−, because otherwise Ji,k would complete in [t−, t].

How to compute EG? For Tℓ ∈ T', denote by Rℓ the earliest release time of a job of Tℓ in the interval [t−, t]. For every Tℓ ∈ T', exactly ⌊(t − Rℓ)/pℓ⌋ jobs of Tℓ belong to G. (For every Tℓ ∈ T ∖ T', exactly 0 jobs belong to G.) Thus

    EG = Σ_{Tℓ∈T'} ⌊(t − Rℓ)/pℓ⌋ eℓ

As argued above:

    t − t− < EG = Σ_{Tℓ∈T'} ⌊(t − Rℓ)/pℓ⌋ eℓ ≤ Σ_{Tℓ∈T'} ((t − t−)/pℓ) eℓ = (t − t−) Σ_{Tℓ∈T'} uℓ ≤ (t − t−) UT

which implies that UT > 1. □

Density and EDF

What about tasks with Di < pi?

The density of a task Ti with period pi, execution time ei, and relative deadline Di is defined by ei / min(Di, pi).
The total density ΔT of a set of tasks T is the sum of the densities of the tasks in T.
Note that if Di < pi for some i, then ΔT > UT.

Theorem 15
A set T of independent, preemptable, periodic tasks can be feasibly scheduled on one processor if ΔT ≤ 1.

Note that this is NOT a necessary condition! (Example on whiteboard.)

Schedulability Test for EDF

The problem: Given a set of independent, preemptable, periodic tasks T = {T1, ..., Tn}, where each Ti has period pi, execution time ei, and relative deadline Di, decide whether T is schedulable by EDF.

Solution using utilization and density: If pi ≤ Di for each i, then it suffices to decide whether UT ≤ 1. Otherwise, decide whether ΔT ≤ 1:
- If yes, then T is schedulable with EDF.
- If not, then T may or may not be schedulable (the test is inconclusive).

Note that
- phases of tasks do not have to be specified;
- parameters may vary: increasing periods or deadlines, or decreasing execution times, does not prevent schedulability.

Schedulability Test for EDF – Example

Consider a digital robot controller:
- A control-law computation
  - takes no more than 8 ms
  - sampling rate: 100 Hz, i.e. the computation runs every 10 ms
Feasible? Trivially yes ....
- Add a Built-In Self-Test (BIST) task:
  - maximum execution time 50 ms
  - we want the minimal feasible period (at most one second)
With a BIST period of 250 ms, the system is still feasible ....
- Add a telemetry task:
  - maximum execution time 15 ms
  - we want to minimize the deadline on telemetry; its period may be large
Reducing BIST to once a second, the deadline on telemetry may be set to 100 ms ....
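The utilization/density test behind this example can be sketched in Python (a minimal sketch under the slides' assumptions; the function name and task tuples are mine):

```python
# Sufficient EDF schedulability test based on total density (Theorem 15).
# A task is a triple (period, execution_time, relative_deadline), e.g. in ms.

def edf_schedulable(tasks):
    """Return True if total density <= 1.

    When every relative deadline is at least the period, density equals
    utilization and the test is also necessary (Theorem 14); with tighter
    deadlines it is sufficient only, i.e. False is inconclusive.
    """
    density = sum(e / min(d, p) for (p, e, d) in tasks)
    return density <= 1.0

# Robot-controller example: control law (10 ms period, 8 ms), BIST once a
# second (1000 ms, 50 ms), telemetry with a large period but 100 ms deadline.
control = (10, 8, 10)
bist = (1000, 50, 1000)
telemetry = (10_000, 15, 100)

# Total density: 8/10 + 50/1000 + 15/100 = 0.8 + 0.05 + 0.15 = 1.0
print(edf_schedulable([control, bist, telemetry]))
```

With BIST at a 250 ms period instead, the control law and BIST alone already reach utilization 8/10 + 50/250 = 1.0, which is why the telemetry deadline forces BIST back to once per second.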