Real-Time Scheduling: Priority-Driven Scheduling with Fixed Priorities

Fixed-Priority Algorithms

- We consider a set of tasks T = {T_1, ..., T_n}.
- Any fixed-priority algorithm schedules the tasks of T according to fixed (distinct) priorities assigned to the tasks.
- We write T_i ≻ T_j whenever T_i has a higher priority than T_j.
- We denote by T_i↑ the set of tasks that have the same or higher priority than T_i.

Recall that fixed-priority algorithms need not be optimal:
- Consider T = {T_1, T_2} where T_1 = (2, 1) and T_2 = (5, 2.5).
- U_T = 1, and thus T is schedulable by EDF.
- If T_1 ≻ T_2, then J_{2,1} misses its deadline.
- If T_2 ≻ T_1, then J_{1,1} misses its deadline.

Critical Instant – Informally

- To analyze fixed-priority algorithms further, we need the notion of a critical instant.
- Intuitively, a critical instant of a task is a time instant at which the system is most loaded and a job of the task has its worst response time.
- Schedulability of a set of tasks is determined by the response times of jobs released at critical instants.

Critical Instant – Formally

Definition 1. A critical instant t_crit of a task T_i is a time instant at which a job J_{i,k} of T_i is released so that J_{i,k} either does not meet its deadline, or has the maximum response time of all jobs of T_i. Denote by W_i the response time of such a J_{i,k}.

Theorem 2. In a fixed-priority system where every job completes before the next job of the same task is released, a critical instant of a task T_i occurs when one of its jobs J_{i,k} is released at the same time as a job of every higher-priority task.

Note that the situation described in the theorem does not have to occur if the tasks are not in phase. So we use critical instants either to study tasks in phase, or to obtain upper bounds on schedulability as follows:
- Set the phases of all tasks to zero, which gives a new set of tasks T' = {T'_1, ..., T'_n}.
- Determine the response time w of the first job J'_{i,1} of T'_i.
- Then w ≥ W_i, the response time of a job of T_i released at a critical instant.

RM and DM Algorithms (reminder)

- RM assigns priorities to tasks based on their periods: the priority is inversely proportional to the period p_i.
- DM assigns priorities to tasks based on their relative deadlines: the priority is inversely proportional to the relative deadline D_i.
- (In all cases, ties are broken arbitrarily.)

We consider the following questions:
- Are the algorithms optimal?
- How can we test schedulability efficiently (or even online)?

Optimality of RM for Simply Periodic Tasks

Definition 3. A set {T_1, ..., T_n} is simply periodic if for every pair T_i, T_k satisfying p_i < p_k we have that p_k is an integer multiple of p_i.

Example 4. The helicopter control system from the first lecture.

Theorem 5. A set T of n simply periodic, independent, preemptable tasks with D_i = p_i is schedulable on one processor according to RM iff U_T = Σ_{i=1}^{n} e_i/p_i ≤ 1.

That is, on simply periodic tasks RM is as good as EDF. (A small code sketch of this test follows the note on RM optimality below.)

Optimality of DM (RM) among Fixed-Priority Algorithms

Theorem 6. A set of independent, preemptable periodic tasks with D_i ≤ p_i that are in phase (i.e., ϕ_i = 0 for all i = 1, ..., n) can be feasibly scheduled on one processor according to DM if it can be feasibly scheduled by some fixed-priority algorithm.

Proof. Assume a feasible fixed-priority schedule with T_1 ≻ ··· ≻ T_n. Consider the least i such that the relative deadline D_i of T_i is larger than the relative deadline D_{i+1} of T_{i+1}. Swap the priorities of T_i and T_{i+1}. The resulting schedule is still feasible. DM is obtained by finitely many such swaps.

Note: If the assumptions of the above theorem hold and all relative deadlines are equal to periods, then RM is optimal among all fixed-priority algorithms.
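To make the RM priority assignment and the test of Theorem 5 concrete, here is a minimal Python sketch. The representation of a task as a (period, execution time) pair with D_i = p_i, the example task set, and the function names are illustrative assumptions, not part of the lecture material.

```python
def rm_order(tasks):
    """Sort tasks by RM priority: the shorter the period, the higher the priority."""
    return sorted(tasks, key=lambda t: t[0])

def is_simply_periodic(tasks):
    """Check that for every pair of periods p_i < p_k, p_k is a multiple of p_i."""
    periods = sorted(t[0] for t in tasks)
    return all(periods[k] % periods[i] == 0
               for i in range(len(periods))
               for k in range(i + 1, len(periods)))

def rm_schedulable_simply_periodic(tasks):
    """Theorem 5: a simply periodic set with D_i = p_i is RM-schedulable iff U_T <= 1."""
    assert is_simply_periodic(tasks)
    return sum(e / p for p, e in tasks) <= 1

# Hypothetical example: periods 2, 4, 8 form a simply periodic set with U_T = 0.75
tasks = [(4, 1.0), (2, 0.5), (8, 2.0)]
print([p for p, _ in rm_order(tasks)])        # [2, 4, 8]
print(rm_schedulable_simply_periodic(tasks))  # True
```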
Fixed-Priority Algorithms: Schedulability

We consider two schedulability tests:
- the schedulable utilization U_RM of the RM algorithm,
- time-demand analysis based on the response times of jobs released at critical instants.

Schedulable Utilization for RM

Theorem 7. Let us fix n ∈ N and consider only independent, preemptable periodic tasks with D_i = p_i.
- If T is a set of n tasks satisfying U_T ≤ n(2^{1/n} − 1), then T is schedulable by the RM algorithm.
- For every U > n(2^{1/n} − 1) there is a set T of n tasks satisfying U_T ≤ U that is not schedulable by RM.

It follows that the maximum schedulable utilization U_RM over independent, preemptable periodic tasks satisfies

    U_RM = inf_n n(2^{1/n} − 1) = lim_{n→∞} n(2^{1/n} − 1) = ln 2 ≈ 0.693

Note that U_T ≤ n(2^{1/n} − 1) is a sufficient but not necessary condition for schedulability of T using the RM algorithm (an example will be given later; a small code sketch of the test follows the proof sketch below).

In what follows we assume that p_1 < p_2 < ... < p_n, which implies T_1 ≻ T_2 ≻ ··· ≻ T_n.

Proof Sketch of Theorem 7

A set of tasks T fully utilizes the processor if it is schedulable by RM but any increase in execution time makes the set unschedulable. Given n ∈ N, denote by b_n the greatest lower bound on utilization over all sets of n tasks that fully utilize the processor. We prove that b_n = n(2^{1/n} − 1).

It immediately follows that for every U > b_n there is a set of n tasks T that is not schedulable by RM but satisfies U_T ≤ U. We also prove that if U_T ≤ b_n for a given set T of n tasks, then T is schedulable using the RM algorithm, by induction on n.

Proof: Given p_1, ..., p_n, denote by U[p_1, ..., p_n] the minimum utilization over sets of n tasks with periods p_1, ..., p_n that fully utilize the processor.
(A) Show that for every set that fully utilizes the processor there is another one whose utilization cannot be larger and that 1. is in phase, and 2. satisfies p_n ≤ 2p_1. In the rest of the proof we assume 1. and 2.
(B) For a fixed set of periods p_1, ..., p_n, find a set T with periods p_1, ..., p_n and utilization U[p_1, ..., p_n].
(C) Show that the minimum of U[p_1, ..., p_n] over all p_1, ..., p_n equals n(2^{1/n} − 1).

Proof Sketch of Theorem 7 – Step (B)

In general, for p_n ≤ 2p_1, the following instance gives b_n:

[Figure: timelines of T_1, ..., T_n, each drawn from 0 to its period (T_1 up to 2p_1), illustrating the extremal instance.]

    e_k = p_{k+1} − p_k   for k = 1, ..., n − 1
    e_n = p_n − 2 Σ_{k=1}^{n−1} e_k = 2p_1 − p_n
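The schedulable utilization test of Theorem 7 is easy to turn into code. The following is a minimal Python sketch under the same assumed (period, execution time) task representation as before; the function name is illustrative. It is applied to the task set used in the time-demand analysis example later in this section, which shows that the test is only sufficient.

```python
def rm_utilization_bound_test(tasks):
    """Sufficient (not necessary) RM test from Theorem 7: U_T <= n(2^(1/n) - 1).

    tasks: list of (period, execution_time) pairs with D_i = p_i
    (this representation is an assumption made for illustration).
    """
    n = len(tasks)
    utilization = sum(e / p for p, e in tasks)
    bound = n * (2 ** (1 / n) - 1)  # approaches ln 2 ~ 0.693 from above as n grows
    return utilization <= bound, utilization, bound

# The task set used in the time-demand analysis example later in this section:
ok, u, bound = rm_utilization_bound_test([(3, 1), (5, 1.5), (7, 1.25), (9, 0.5)])
print(ok, round(u, 3), round(bound, 3))
# False 0.867 0.757 -- the sufficient test fails, yet the set is RM-schedulable
```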
Time-Demand Analysis

Assume that D_i ≤ p_i for every i.

- Compute the total demand for processor time by a job released at a critical instant of a task, and by all the higher-priority tasks, as a function of time from the critical instant.
- Check whether this demand can be met before the deadline of the job:
  - Consider one task T_i at a time, starting with the highest priority and working down to the lowest priority.
  - Focus on a job J_{i,c} of T_i whose release time t_0 is a critical instant of T_i.
  - At time t_0 + t for t ≥ 0, the processor time demand w_i(t) for this job and all higher-priority jobs released in [t_0, t] is

        w_i(t) = e_i + Σ_{k=1}^{i−1} ⌈t/p_k⌉ e_k   for 0 < t ≤ p_i

  - (Recall that by the critical instant theorem, the longest response time occurs when jobs of all higher-priority tasks are released at t_0.)

Compare the time demand w_i(t) with the available time t:
- If w_i(t) ≤ t for some t ≤ D_i, the job J_{i,c} released at a critical instant of T_i meets its deadline t_0 + D_i.
- If w_i(t) > t for all 0 < t ≤ D_i, then the job may miss its deadline and the system may not be schedulable by the given fixed-priority algorithm. (Note that the test is only sufficient, as the expression for w_i(t) relies on the assumption that jobs of all higher-priority tasks are released at the critical instant t_0.)
- Use this method to check that every task is schedulable when its jobs are released at critical instants; if so, conclude that the entire system is schedulable.

Time-Demand Analysis – Example

Example: T_1 = (3, 1), T_2 = (5, 1.5), T_3 = (7, 1.25), T_4 = (9, 0.5).
This set is schedulable by RM even though U_{T_1,...,T_4} ≈ 0.87 > 0.757 = U_RM(4).

Where to check w_i(t) ≤ t:
- The time-demand function w_i(t) is a staircase function.
- Steps in the time demand of a task occur at multiples of the periods of higher-priority tasks.
- Between two consecutive steps, the value of w_i(t) − t decreases linearly.
- Hence, to decide the schedulability of a task, it suffices to check whether w_i(t) ≤ t at the time instants when a higher-priority job is released and at D_i.
- Our schedulability test becomes:
  - Compute w_i(t).
  - Check whether w_i(t) ≤ t for some t equal either to D_i, or to j · p_k where k = 1, 2, ..., i and j = 1, 2, ..., ⌊D_i/p_k⌋.
  (A code sketch of this test is given at the end of this section.)

The time-demand analysis schedulability test is more complex than the schedulable utilization test, but more general:
- It works for any fixed-priority scheduling algorithm, provided the tasks have short response times (D_i ≤ p_i).
- It can be extended to tasks with arbitrary deadlines.
- It is still more efficient than exhaustive simulation.
- It is only a sufficient test (as is the utilization test for fixed-priority systems).

Dynamic vs Fixed Priority

EDF
- pros:
  - optimal
  - very simple and complete test for schedulability
- cons:
  - difficult to predict which job misses its deadline
  - strictly following EDF in case of overloads assigns higher priority to jobs that have already missed their deadlines
  - larger scheduling overhead
DM (RM)
- pros:
  - easier to predict which job misses its deadline (in particular, tasks are not blocked by lower-priority tasks)
  - easy implementation with little scheduling overhead
  - optimal in some cases that often occur in practice
- cons:
  - not optimal
  - incomplete and more involved tests for schedulability
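Finally, here is a minimal Python sketch of the time-demand analysis test described in this section, applied to the example task set. The (period, relative deadline, execution time) representation, the requirement that tasks are listed in decreasing priority order, and the function name are assumptions made for illustration.

```python
import math

def time_demand_test(tasks):
    """Time-demand analysis for fixed priorities (sufficient test, assumes D_i <= p_i).

    tasks: list of (period, relative_deadline, execution_time) triples listed in
    decreasing priority order (e.g. RM order); this representation is an
    assumption made for illustration.
    """
    for i, (p_i, d_i, e_i) in enumerate(tasks):
        higher = tasks[:i]
        # Check points: multiples of higher-priority periods up to D_i, plus D_i itself.
        points = {d_i}
        for p_k, _, _ in higher:
            points.update(j * p_k for j in range(1, math.floor(d_i / p_k) + 1))

        def w(t):
            # w_i(t) = e_i + sum over higher-priority tasks of ceil(t / p_k) * e_k
            return e_i + sum(math.ceil(t / p_k) * e_k for p_k, _, e_k in higher)

        if not any(w(t) <= t for t in sorted(points)):
            return False  # T_i may miss its deadline when released at a critical instant
    return True

# The example set from this section, in RM order, with D_i = p_i:
tasks = [(3, 3, 1.0), (5, 5, 1.5), (7, 7, 1.25), (9, 9, 0.5)]
print(time_demand_test(tasks))  # True: schedulable although U_T ~ 0.87 exceeds the RM bound 0.757
```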