Priority Scheduling for shared resources – semaphores

Safety-critical real-time systems have both functional and non-functional requirements. Functional requirements specify what an application does, while non-functional requirements specify the quality of its functional behavior.

Examples of non-functional requirements are throughput, energy consumption, timeliness, and so on. In the context of real-time systems, the correctness of the system depends not only on the logical output of the functions but also on when the output is generated. In other words, generating the correct output within a certain deadline is of utmost importance for safety-critical real-time systems.


A spacecraft is an example of a safety-critical real-time system. Missing the deadline of some function may compromise the success of the mission, which often costs several billion dollars for design, development, and implementation.

Priority Scheduling

Many real-time applications (e.g., control and monitoring) are modeled as a collection of periodic tasks. Each periodic task becomes available for execution at the beginning of its period and must complete its execution before the next period begins (i.e., the deadline of the task). Each instance of a periodic task is called a job.
The scheduling algorithm plays the vital role of determining which task to dispatch to the processor for execution, and at which time instant, when several tasks are available for execution.

A task is said to be active when it has arrived and has not yet completed its execution. Preemptive fixed-priority scheduling is the preferred approach in industry for executing periodic tasks: each task is statically assigned one fixed priority, and the scheduler always dispatches the highest priority active task, preempting, if necessary, the execution of a lower priority task. In other words, a higher priority task can preempt the execution of a lower priority task, but a lower priority task cannot preempt the execution of a higher priority task. A lower priority task resumes its execution later when no higher priority active task is awaiting execution.
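The dispatch rule above can be sketched in a few lines of Python. This is a toy model under assumed conventions (tasks as name/priority pairs, higher number meaning higher priority); a real kernel would maintain ready queues and task control blocks instead.

```python
# Toy model of a preemptive fixed-priority dispatcher: at every scheduling
# point, the highest-priority active task is the one that runs.

def dispatch(active_tasks):
    """Return the active task with the highest fixed priority.

    active_tasks: list of (name, priority) pairs for tasks that have
    arrived and not yet completed. Higher number = higher priority.
    """
    if not active_tasks:
        return None  # no active task: the processor idles
    return max(active_tasks, key=lambda t: t[1])

# J1 (priority 2) is active alongside J2 (priority 1), so J1 is dispatched,
# preempting J2 if J2 was running.
print(dispatch([("J2", 1), ("J1", 2)]))  # -> ('J1', 2)
```

Calling `dispatch` at every arrival and completion event is exactly what makes the scheduling preemptive: a newly arrived higher priority task immediately displaces the running lower priority one.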


Tasks generally share resources, for example, data structures, main memory, files, processor registers, I/O units, communication buses, and so on. It is important to enforce effective synchronization mechanisms when multiple tasks share a resource in order to ensure a consistent view of the system. One way to ensure such synchronization is to use a semaphore for each shared resource. A task holding a semaphore has exclusive access to that resource. No task can acquire a semaphore that is currently locked by another task (i.e., at most one task can execute in the critical section).



How it works

We denote by SR the semaphore for shared resource R. A job invokes the wait(SR) and signal(SR) operations to acquire and release resource R, respectively. If resource R is already in use by some job J2 of a lower priority task, then a job J1 of a higher priority task is blocked when it invokes wait(SR). After job J2 releases resource R by calling signal(SR), job J1 can acquire the semaphore and gains access to resource R. Note that a higher priority task can preempt the execution of a lower priority task; however, a lower priority task can block the execution of a higher priority task if they share some resource and that resource is currently in the possession of the lower priority task. To see how this happens, consider the following two jobs J1 and J2, where J1 has higher fixed priority than J2 and resource R is shared between them.


Figure 1: The requests and releases of the shared resource R by jobs J1 and J2. Blocking of the higher priority job J1 by the lower priority job J2 can happen, depending on when the jobs execute.
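The wait(SR)/signal(SR) pattern can be sketched with Python's `threading.Semaphore`, where `acquire` plays the role of wait and `release` the role of signal. The shared counter below is an illustrative stand-in for resource R, not something from the figure.

```python
import threading

# Binary semaphore SR guarding shared resource R
# (acquire = wait(SR), release = signal(SR)).
SR = threading.Semaphore(1)
shared_counter = 0  # stand-in for the shared resource R

def use_resource(increments):
    global shared_counter
    for _ in range(increments):
        SR.acquire()           # wait(SR): block if another job holds R
        shared_counter += 1    # critical section: exclusive access to R
        SR.release()           # signal(SR): let a blocked job proceed

# Two "jobs" contend for R; the semaphore serializes the critical sections.
threads = [threading.Thread(target=use_resource, args=(10000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_counter)  # 20000: no update is lost, thanks to mutual exclusion
```

Without the semaphore, the two threads could interleave their read-modify-write steps and lose updates; with it, at most one job executes in the critical section at a time.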

The Unavoidable Blocking

Consider the following execution pattern in Figure 2 of the two jobs J1 and J2, where prio(J1) > prio(J2). This situation is called unavoidable blocking: the higher priority job J1 must be blocked because the lower priority job J2 is already holding R when J1 needs it.


Figure 2: Unavoidable blocking of job J1 by job J2.

Job J2 starts its normal execution, then locks the semaphore and executes in its critical section. While J2 is executing its critical section, job J1 arrives, preempts the execution of J2, and starts its normal execution. At time instant t1, job J1 needs resource R and is blocked since the semaphore of R is locked by job J2. At this moment, job J2 resumes execution, completes its critical section by time instant t2, and releases the semaphore. Then J1 resumes execution by preempting job J2 and runs to completion. Finally, job J2 resumes and finishes its execution.
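The timeline just described can be reproduced with a small time-step simulation. The segment lengths and arrival times below are illustrative assumptions, not values taken from Figure 2.

```python
# Time-step simulation of unavoidable blocking: J1 (high priority) is
# blocked while J2 (low priority) finishes its critical section.
# Each job is a list of segments: 'N' = normal execution, 'C' = critical
# section (needs semaphore SR). Segment lengths are assumptions.

def simulate():
    jobs = {
        "J1": {"prio": 2, "arrival": 3, "segs": [["N", 2], ["C", 2], ["N", 1]]},
        "J2": {"prio": 1, "arrival": 0, "segs": [["N", 2], ["C", 3], ["N", 2]]},
    }
    holder = None   # job currently holding SR, if any
    trace = []      # which job runs at each time instant
    for t in range(20):
        active = [n for n, j in jobs.items() if j["arrival"] <= t and j["segs"]]
        if not active:
            break
        # Dispatch the highest-priority active job, unless it is blocked on SR.
        runnable = None
        for name in sorted(active, key=lambda n: -jobs[n]["prio"]):
            seg = jobs[name]["segs"][0]
            if seg[0] == "C" and holder not in (None, name):
                continue  # blocked: SR is locked by another job
            runnable = name
            break
        trace.append(runnable)
        seg = jobs[runnable]["segs"][0]
        if seg[0] == "C":
            holder = runnable          # wait(SR) succeeded (or SR already held)
        seg[1] -= 1
        if seg[1] == 0:
            if seg[0] == "C":
                holder = None          # signal(SR) on leaving the critical section
            jobs[runnable]["segs"].pop(0)
    return trace

print(simulate())
# -> ['J2', 'J2', 'J2', 'J1', 'J1', 'J2', 'J2', 'J1', 'J1', 'J1', 'J2', 'J2']
```

Reading the trace: J2 runs first and enters its critical section; J1 arrives and preempts (instants 3–4); at instant 5 J1 needs R and is blocked, so J2 runs until it releases SR; J1 then preempts again and completes; finally J2 finishes.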

Points to be aware of

There are mature fixed-priority scheduling algorithms for scheduling a set of periodic real-time tasks under the assumption that tasks are independent, i.e., the tasks do not share any resource other than the processing platform. However, this assumption does not always hold, and problems begin when tasks share other resources. Semaphores are primarily used to arbitrate access to shared resources. This leads to a synchronization problem called the priority inversion problem in real-time systems with shared resources. The most famous example of this problem is the Mars Pathfinder incident.
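Priority inversion goes beyond the unavoidable blocking shown above: a medium-priority job that never touches R can preempt the low-priority holder of SR and thereby keep the high-priority job blocked indefinitely. The sketch below extends the earlier simulation with a hypothetical medium-priority job JM; all arrival times and segment lengths are illustrative assumptions.

```python
# Sketch of priority inversion: JM (medium priority, no use of R) preempts
# J2 while J2 holds SR, so the high-priority J1 stays blocked for as long
# as JM runs. 'N' = normal execution, 'C' = critical section on SR.

def simulate_inversion():
    jobs = {
        "J1": {"prio": 3, "arrival": 1, "segs": [["C", 1]]},  # needs R at once
        "JM": {"prio": 2, "arrival": 2, "segs": [["N", 4]]},  # never uses R
        "J2": {"prio": 1, "arrival": 0, "segs": [["C", 5]]},  # grabs R early
    }
    holder = None
    trace = []
    for t in range(20):
        active = [n for n, j in jobs.items() if j["arrival"] <= t and j["segs"]]
        if not active:
            break
        runnable = None
        for name in sorted(active, key=lambda n: -jobs[n]["prio"]):
            seg = jobs[name]["segs"][0]
            if seg[0] == "C" and holder not in (None, name):
                continue  # blocked on SR
            runnable = name
            break
        trace.append(runnable)
        seg = jobs[runnable]["segs"][0]
        if seg[0] == "C":
            holder = runnable
        seg[1] -= 1
        if seg[1] == 0:
            if seg[0] == "C":
                holder = None
            jobs[runnable]["segs"].pop(0)
    return trace

print(simulate_inversion())
# -> ['J2', 'J2', 'JM', 'JM', 'JM', 'JM', 'J2', 'J2', 'J2', 'J1']
```

Note how J1, the highest-priority job, runs last: it is blocked on SR, but the job holding SR cannot make progress because JM keeps preempting it. Protocols such as priority inheritance and priority ceiling bound this blocking; in the Pathfinder case the problem was resolved by enabling priority inheritance on the offending mutex.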

