
Chapter 2. Task management and scheduling

So, a “one-size-fits-all” scheduling algorithm does not exist, although that is exactly what a general

purpose operating system hopes to offer. Hence, it should come as no surprise that there is a vast

literature on the theory of scheduling, accompanied by a large variety of (un)implemented scheduling

algorithms. A theoretically optimal schedule can only be reached in the unlikely situation of complete

knowledge about the processing, synchronization and communication requirements of each task, and the

processing and timing properties of the hardware. This state of complete knowledge is seldom reached in

real-world applications, especially when the requirements are dynamic (i.e., time varying). And even with

complete predictability, the general scheduling problem is NP-complete, which means that, in all likelihood, its complexity

increases exponentially with the number of tasks and constraints involved in the scheduling. And hence,

the scheduling algorithms don’t scale well under a growing load and/or hardware resources. This does

not imply, however, that the problem is infeasible for applications with only a few, well-defined tasks.

Each OS has a scheduler function (let’s call it schedule()) that implements the scheduling algorithm.

(Later sections discuss the most common scheduling algorithms.) This scheduler is not a task in itself: it

is a function call, that is called at various points in the kernel. These points are, not surprisingly, called

scheduling points. Typical scheduling points are: end of interrupt service routines (Section 3.3), the

moments when tasks want to go to sleep for one reason or another, or when they become ready to run.

Scheduling is pure overhead: all time spent on calculating which task to run next is lost for the really

productive tasks. And trying to use more optimal schedulers isn’t always a clever “solution”: advanced

schedulers consume (often unpredictably!) more time and resources, and their increased complexity

makes it more difficult for programmers to work with. Hence, the chance that those programmers make

the wrong design decisions increases. Simplicity is especially a key feature for real-time and embedded

systems; complex schedulers appear more in Operations Research applications, where the scheduling

problem and its algorithmic complexity are comparable to the operating system case, but where the

real-time constraints and the predictability of the cost of tasks are more manageable.

(TODO: explain (POSIX) cancellation points: why are they needed? what makes a point a valid

cancellation point? Warn against using cancellation, because it’s so error prone. Not only from the OS

point of view (the OS must make sure its thread and lock bookkeeping remains consistent, which is not

a simple job), but also from the application point of view (how do you make sure that there is no race

between one thread trying to cancel another thread, and a third thread that still wants to interact with that

to-be-cancelled thread? It’s way better to have each thread exit itself explicitly, and to have an explicit

exit condition for each thread. And to make thread interaction asynchronous.)

2.5. Priority-based scheduling

The simplest approach to the scheduling problem is to assign static priorities to all tasks. That means

that the priority is given to the task at the time it is created. The schedule() function is then

very simple, because it looks at all wait queues at each priority level, and starts the task with the highest

priority that is ready to run.

Using priorities implies using pre-emption: schedule() interrupts a lower priority task in order to run a

higher priority task that requests it. Pre-emption means that the running task’s context is switched out,

and the new task’s context is switched in.

One classifies priorities into statically and dynamically assigned priorities. In the former case, a task is

given a priority by the programmer at design time (or by the operator at system initialization time), and it

keeps this priority during its whole lifetime. In the dynamic case, schedule() becomes more complex,

because it has to calculate the task’s priority on-line, based on a number of dynamically changing

parameters (time till next deadline; amount of work to process; etc.). As described before, the optimal

solution to a scheduling problem is usually impossible to find, so scheduling is often based on a set of

heuristics. This is the case for real-time as well as non-real-time schedulers. The heuristics in a general

purpose OS can be quite involved, but real-time and embedded operating systems mostly use simple

heuristics. Because “simple” means: faster and smaller and more predictable! Examples of such simple

dynamic scheduling algorithms, that are sometimes used to replace static priority scheduling, are:

• Rate monotonic (RM). A task gets a higher priority if it has to run more frequently. This is a common

approach in the case that all tasks are periodic. So, a task that has to run every n milliseconds gets a

higher priority than a task that runs every m milliseconds when n<m. Hence, changing the scheduling

frequency of a task on-line also changes its priority. The scheduler needs to know the periods of all

tasks it has to schedule.

• Earliest deadline first (EDF). At each instant in time, there are a number of tasks that need to be

finished in the near future. A task with a closer deadline gets a higher scheduling priority. The

scheduler needs not only to know the deadline time of all tasks it has to schedule, but also their

duration.
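
To make these two policies concrete, here is a minimal sketch (not part of the original text) of the selection step of an EDF scheduler; the task structure, its fields and the linked list are hypothetical:

#include <stddef.h>

typedef long long time_us;            /* hypothetical time unit: microseconds */

struct task {
    time_us      deadline;             /* absolute deadline of the next job */
    int          ready;                /* non-zero if the task is ready to run */
    struct task *next;                 /* singly-linked list of all tasks */
};

/* Earliest Deadline First selection: scan all ready tasks and return the
 * one whose absolute deadline is closest.  Real schedulers keep the tasks
 * in a sorted queue or a heap, so this step costs O(1) or O(log n). */
struct task *edf_pick_next(struct task *all_tasks)
{
    struct task *best = NULL;
    for (struct task *t = all_tasks; t != NULL; t = t->next) {
        if (t->ready && (best == NULL || t->deadline < best->deadline))
            best = t;
    }
    return best;
}

A rate monotonic scheduler would compare the tasks’ fixed periods in the same way, instead of their current absolute deadlines.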

If different tasks in the system request different scheduling policies, the operating system has to make

trade-offs in determining the relative “weight” to give to each of the scheduling algorithms. These

trade-offs will most probably be quite arbitrary, so porting your application between operating systems

could lead to different scheduling results.

Priority-based scheduling is simple to implement, because schedule() just has to look at the tasks in

the highest priority queue that are ready to be scheduled, and to start the first one in this queue.

Priority-based scheduling, however, is difficult for the application programmers: they must try to map the

often complex (“high-dimensional”) synchronization interdependencies between the different threads in

their application onto the linear scale offered by priorities! One often-observed phenomenon in real-time

applications that grow over time, is that the programmers tend to raise the priorities of some threads,

every time they notice that the introduction of new functionality (and hence new threads) disturbs the

synchronization of the existing threads. Chapter 14 gives some more examples of the negative effects of

“coupling”, and Chapter 15 discusses time-proven approaches to take care of complex interdependencies.

So, the problem with priority-based scheduling is that it is an indirect way to specify how to cope with

timing and synchronization constraints: at run-time, schedule() doesn’t take these constraints

themselves into account, but knows only about the priorities, which are the programmer’s indirect model

of the constraints.

In practice, all RTOSs at least offer static priority-based scheduling. Many also implement other

algorithms. Not always because of the intrinsic added value of the algorithm, but rather because of

typical marketing drives: users tend to buy software products with the highest number of features, even if

they risk drowning in the complexity and “feature bloat” (whose implications they often don’t even

understand. . . ). One of the more serious feature bloat examples in priority-based scheduling is the

so-called priority inheritance “solution” to the priority inversion phenomenon (see Section 4.8), that

occurs when tasks share resources which they should not access concurrently.

2.6. Priority spaces

Many operating systems (especially RTOSs) let all tasks (system tasks as well as user tasks) live in the

same priority space: any task can be given any priority within this space. Others, such as Linux, UNIX,

or Microsoft NT, have separate priority spaces for different kinds of tasks. Linux has two: user space and

kernel space. Tasks running in user space can change their priorities (through the nice() function call),

but all of them are pre-empted by any task in kernel space. Kernel space itself has three priority levels:

1. Interrupts: the “task” that services a hardware interrupt (timer, network, keyboard, . . . ) has the

highest priority. Such a task is called an interrupt service routine (ISR). (Section 3.3.) It should be as

short as possible, because it runs with all other interrupts disabled. An ISR is not really a task, but

just a function call, and its execution is not determined by the scheduler: the ISR is executed

immediately at the occurrence of a hardware interrupt, by the hardware of the interrupt controller

and the CPU (Section 3.2). The operating system software is not involved at all.

2. Tasklet functions (Linux specific, Section 3.4) and Deferred Service Routines (terminology often

used outside of Linux) are functions (again, not tasks!) that run at the second highest priority. Only

a hardware interrupt can pre-empt them. A tasklet can be activated by any kernel task; a deferred

interrupt function (Section 3.4) is typically triggered by a hardware interrupt service routine, to

further process an interrupt after the ISR has finished. Both have the same properties, and are

executed after all hardware interrupt service routines have finished, and before the “normal” tasks are

scheduled; interrupts are enabled when they run. In contrast to the hardware interrupts, the operating

system software is involved in determining when they are executed.

3. All other kernel tasks run at the lowest priority level in the kernel. They pre-empt every user space

task.

There is no consensus about the relative merits of having separate user and kernel spaces: some consider

it to be a design advantage (“divide et impera”), while others experience it as an unnecessarily artificial

constraint on their flexibility.

2.7. Linux scheduler

The scheduler implemented in the file /usr/src/linux/kernel/sched.c of the Linux source tree

works with three scheduling modes (which are defined in the POSIX standard): SCHED_RR,

SCHED_FIFO and SCHED_OTHER. SCHED_OTHER is the default. The scheduling mode of a task is set

by the POSIX system call sched_setscheduler().

SCHED_RR is the round-robin time slicing algorithm. After a task finishes its time slice, it is moved to

the tail of its priority queue, such that another task in the same priority level can start running. If there is

no other task at this priority, the pre-empted task can continue.

SCHED_FIFO is a First-In, First-Out scheduling algorithm: the tasks in one priority level are scheduled

in the order they get ready to run; once a task is scheduled, it keeps the processor until pre-empted by a

higher priority task, until it releases the processor voluntarily, or until it has to wait to get access to some

resource. This scheduler mode is often called “POSIX soft real-time” because it corresponds to the most

common real-time scheduling approach with static priorities, but without the other necessary real-time

components.

The behaviour of the SCHED_OTHER scheduler function is not prescribed by the POSIX standard. It is

meant to give freedom to the operating system programmers to implement their own scheduling

algorithm. In Linux, as in all general-purpose operating systems, the SCHED_OTHER scheduler function

tries to combine two conflicting performance measures: maximum throughput and good response to

interactive users. The Linux scheduler calculates a “goodness” value for each candidate task, based on a

number of heuristic rules. Recently, the scheduler function got a lot of attention from the Linux kernel

developers, since a new O(1) (“order one”) scheduling algorithm was introduced. O(1) means that the

function’s computational time does not increase with the number of tasks that must be scheduled. This

has led to a more responsive kernel, certainly in combination with the increased number of pre-emption

points (Section 2.8), which all lead to a call to the scheduler function.

Kernel tasks with the SCHED_OTHER scheduling policy receive the lowest priority, “0”, while the

SCHED_RR and SCHED_FIFO policies can use priority levels from “1” to “99”. User space tasks are

always scheduled with the SCHED_OTHER policy. The priority levels 0 to 99 are prescribed in the POSIX

standard, and the portable POSIX way to find out about the minimum and maximum scheduling

priorities is through the sched_get_priority_min() and sched_get_priority_max() system

calls. Both take one of the priority policies as their argument.
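
As a hedged illustration of these POSIX calls (the chosen priority is arbitrary, error handling is minimal, and on standard Linux this requires sufficient privileges):

#include <sched.h>
#include <stdio.h>

/* Put the calling process in the SCHED_FIFO class, at a priority halfway
 * between the minimum and maximum that this OS allows for that policy. */
int make_fifo_task(void)
{
    struct sched_param param;
    int min = sched_get_priority_min(SCHED_FIFO);
    int max = sched_get_priority_max(SCHED_FIFO);

    if (min == -1 || max == -1)
        return -1;                           /* policy not supported */

    param.sched_priority = (min + max) / 2;
    if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
        perror("sched_setscheduler");        /* typically EPERM without privileges */
        return -1;
    }
    return 0;
}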

The scheduling for Symmetric Multi-Processor (SMP) systems is basically the same as for the

uni-processor case. There are some extra function calls to assign a task or an interrupt to a specific

processor, if the programmer desires so. This decision could lead to more efficient execution, because it

increases the chance that the task’s or ISR’s code can permanently be kept in the cache of that particular

processor.

2.8. Linux real-time scheduling

Linux will not become a full-fledged RTOS, for the simple reason that the requirements for a

general-purpose operating system are very different from those of an RTOS. However, soft real-time

additions to the standard Linux kernel have been developed in several places.

One active source of soft real-time efforts has been the audio and video community: in this area, Linux

and Microsoft NT perform poorly, in comparison to, for example, BeOS and Microsoft Windows 95. The

reason is that Linux and Microsoft NT can’t guarantee these multi-media tasks a deterministic share of

the resources (QoS). BeOS does offer QoS scheduling, while Microsoft Windows 95 simply has much

fewer things to do than a “real” operating system. . . .

Another reason for soft real-time work is the drive to make Linux scale better on multi-processor

systems. In this context, it is important to keep the locks on kernel functionality as small as possible,

because if one processor needs a lock, the other processors are also disturbed in their activity. The

expectation is that the scalability activity will make Linux into an operating system that can almost

guarantee milli-second deadlines (i.e., “soft real time”), without making it into a real RTOS.

Here is a (non-exhaustive) list of efforts to improve on latency problems in the Linux kernel:

• Montavista’s Hard Hat Linux (http://www.mvista.com/products/hhl.html) with its pre-emption

(http://www.mvista.com/dswp/PreemptibleLinux.pdf) patches. These are currently maintained by

Robert Love (http://www.tech9.net/rml/linux/), and gradually introduced in the new 2.5.x kernels. The

idea is to see whether the scheduler could run, at the moment that a kernel spinlock (Section 4.6.3)

is released, or an interrupt routine (Chapter 3) exits. Commands exist to disable or enable kernel

pre-emption.

• Ingo Molnar’s low-latency patches, now maintained by Andrew

Morton (http://www.zipworld.com.au/~akpm/). They introduce more scheduling

points (http://www.linuxdevices.com/articles/AT8906594941.html) in the kernel code, such that the

time is reduced between the occurrence of an event that requires rescheduling and the actual

rescheduling. Probably, this work will be combined with the pre-emption work mentioned above.

• TimeSys Linux/RT (http://www.timesys.com) develops and commercializes the so-called “Resource

Kernel” loadable module, that makes the standard Linux kernel pre-emptable, and that allows one to build

QoS scheduling for user tasks.

• KURT (http://www.ittc.ukans.edu/kurt/) (Kansas University Real-Time Linux). KURT Linux allows

for explicit scheduling of any real-time events rather than just tasks. This provides a more generic

framework onto which normal real-time process scheduling is mapped. Since event scheduling is

handled by the system, addition of new events such as periodic sampling data acquisition cards (video,

lab equipment, etc.) is highly simplified. KURT introduces two modes of operation: the normal mode

and the real-time mode. In normal mode, the system acts as a generic Linux system. When the kernel

is running in real-time mode, it only executes real-time processes. While in real-time mode, the system

can no longer be used as a generic workstation, as all of its resources are dedicated to executing its

real-time responsibilities as accurately as possible.

• The LinuxBIOS (http://www.linuxbios.org) project allows one to get rid of the usual BIOS chips that

manage part of the hardware, and that introduce significant delays when booting. A LinuxBIOS

startup can take place in a few seconds, booting immediately into a ready-to-go kernel.

• Linux-SRT (http://www.uk.research.att.com/~dmi/linux-srt/) is a QoS scheduler.

• QLinux (http://www.cs.umass.edu/~lass/software/qlinux/) is a QoS scheduler.

• Fairsched (http://fairsched.sourceforge.net/) is a hierarchical QoS scheduler: tasks are divided into

groups and each group receives guaranteed CPU time allocation proportional to its weight. The

standard scheduler is used to schedule processes within a group.

• DWCS (http://www.cc.gatech.edu/~west/dwcs.html) (Dynamic Window-Constrained Scheduling) is a

QoS scheduler, parameterizing the service in terms of a request period and a window constraint. The

request period is the time interval over which a task must receive some share of the CPU; the window

constraint is the value of the minimum share the task must receive during this “window”.

Chapter 3. Interrupts

This Chapter explains the basics of interrupt servicing in a computer system, with again an emphasis on

the real-time application. Interrupt hardware and software come in a great variety of implementations and

functionalities, so some of the concepts talked about in this Chapter may not be relevant to your system.

3.1. Introduction

Interrupts are indispensable in most computer systems with real-time ambitions. Interrupts have to be

processed by a so-called ISR (Interrupt Service Routine). The faster this ISR can do its job, the better the

real-time performance of the RTOS, because other tasks are delayed less. Timers are one example of

peripheral devices that generate interrupts; other such devices are the keyboard, DAQ (Data

AcQuisition) cards, video cards, the serial and parallel ports, etc. Also the processor itself can generate

interrupts, e.g., to switch to the “protected mode” of the processor, when executing an illegal operation,

as part of a debugging session, or when an “exception” is raised by an application program.

3.2. Interrupt hardware

An interrupt-driven system (which many RTOSs and EOSs are) typically has one or more of the

following hardware components:

• Interrupt vector. Many systems have more than one hardware interrupt line (also called interrupt

request (IRQ)), and the hardware manufacturer typically assembles all these interrupt lines in an

“interrupt vector”. The INTEL 80x86 processors’ interrupt vector contains 256 entries, and is called

the Interrupt Descriptor Table (IDT), [Hyde97]. (But most PC manufacturers make only 16 of these

interrupts available as hardware interrupts! See below.) The interrupt vector is an array of pointers to

the interrupt service routines (Section 3.3) that will be triggered when the corresponding interrupt

occurs. The vector also contains a bit for each interrupt line that signals whether there is an interrupt

pending on that line, i.e., a peripheral device has raised the interrupt, and is waiting to be serviced.

Some processors use non-vectored interrupt processing: when an interrupt occurs, control is transferred

to one single routine, that has to decide what to do with the interrupt. The same strategy is also used,

in software, in most operating systems to allow multiple devices to share the same interrupt.

• Synchronous or software interrupt. A synchronous interrupt (also called a software interrupt, or a

trap) is an interrupt that is not caused by an (asynchronous) hardware event, but by a specific

(synchronous) machine language operation code: for example, the trap instruction in the Motorola

68000, the swi in the ARM, or the int in the Intel 80x86; or by a divide by zero, a memory segmentation

fault, etc. Since this feature is supported in the hardware, one can expect a large number of different,

not standardized, names and functions for software interrupts. . .

Major differences between asynchronous/hardware interrupts and synchronous/software interrupts, on

most hardware, are that:

1. Further interrupts are disabled as soon as a hardware interrupt comes in, but not disabled in the

case of a software interrupt.

2. The handler of a software interrupt runs in the context of the interrupting task; the ISR of a

hardware interrupt has no connected task context to run in. So, the OS provides a context (that

can be the context of the task that happened to be running at the time of the interrupt).

Hardware and software interrupts do share the same interrupt vector, but that vector then provides

separate ranges for hardware and software interrupts.

• Edge-triggered and level-triggered interrupts. From a hardware point of view, peripheral devices can

transmit their interrupt signals in basically two different ways:

• Edge-triggered. An interrupt is sent when the interrupt line changes from low to high, or vice versa.

That is an almost “zero time” event, which increases the chances for a hardware loss of interrupts by

the interrupt controller. Moreover, if multiple devices are connected to the same interrupt line, the

operating system must call all registered interrupt service routines (see Section 3.3), because

otherwise it could cause a software loss of an interrupt: even if it detected only one edge transition,

and its first ISR acknowledged the receipt of this interrupt, it could still be that it missed another

edge transition, so it can only be sure after it has given all ISRs the chance to work. But of course,

this is not an efficient situation.

• Level-triggered. An interrupt is signaled by a change in the level on the hardware interrupt line. This

not only lowers the chance of missing a transition, but it also allows a more efficient servicing of the

interrupts: each ISR that has serviced the interrupt will acknowledge its peripheral device, which

will take away its contribution to the interrupt line. So, the level will change again after the last

peripheral device has been serviced. And the operating system need not try all ISRs connected to

the same hardware interrupt line.

• Interrupt controller. This is a piece of hardware that shields the operating system from the electronic

details of the interrupt lines. Some controllers are able to queue interrupts, such that none of them gets

lost (up to a given hardware limit, of course). Some allow various ways of configuring priorities on the

different interrupts.

The 8259 (http://www.cast-inc.com/cores/c8259a/c8259a-x.pdf) Programmable Interrupt

Controller (PIC) is still a very common chip for this job on PC architectures, despite its age of more

than 25 years. PC builders usually use two PICs, since each one can cope with only eight interrupts.

But using more than one has to happen in a daisy chain, i.e., the interrupt output pin of the first PIC is

connected to an input pin of the second one; this introduces delays in the interrupt servicing. As

another disadvantage, PICs were not designed to be used in multiprocessor systems.

Higher-quality and/or SMP motherboards use the Advanced Programmable Interrupt

Controller (APIC). This is not just a single chip, but a small hardware system that manages

interrupts:

• Each CPU must have a “local APIC” with which it gets interrupts from the APIC system.

• The peripheral hardware connects its interrupt line to the I/O APIC. (There can be eight of them.)

An I/O APIC then sends a signal to the local APIC of the CPU for which the interrupt is meant.

The APIC architecture is better than the PIC, because (i) it can have many more interrupt lines, hence

eliminating the need to share interrupts, (ii) it has programmable interrupt priorities, (iii) it is faster

to program (only one machine instruction to the local APIC’s Task Priority Register (which is on the

CPU!), instead of two to the PIC, which in addition is not on the CPU) and (iv) it allows to work with

level-triggered interrupts instead of with edge-triggered interrupts. The PCI bus uses active low,

level-triggered interrupts, so it can work fine together with the APIC.

The PowerPC platforms have another interrupt hardware standard, the

OpenPIC (http://www.itis.mn.it/inform/materiali/evarchi/cyrix.dir/opnparc.htm), which also

guarantees a high hardware quality. OpenPIC also works with x86 architectures.

3.3. Interrupt software

From the software side, an interrupt-driven system must typically take into account one or more of the

following software issues:

• Interrupt Service Routine (ISR), often simply called the interrupt handler. This software routine is

called when an interrupt occurs on the interrupt line for which the ISR has been registered in the

interrupt vector. Typically, this registration takes place through a system call to the operating system,

but it can also be done directly in a machine instruction, by a sufficiently privileged program. The

registration puts the address of the function to be called by the interrupt, in the address field provided

in the interrupt vector at the index of the corresponding interrupt number.

The operating system does not (or rather, cannot) intervene in the launching of the ISR, because

everything is done by the CPU. The context of the currently running task is saved on the stack of that

task: its address is in one of the CPU registers, and it is the only stack that the CPU has immediate and

automatic access to. This fact has influence on the software configuration of the system: each task

must get enough stack space to cope with ISR overhead. So, the worst-case amount of extra stack

space to be foreseen in a task’s memory budget can grow large, especially for systems in which

interrupts can be nested. More and more operating systems, however, provide a separate “context” for

interrupt servicing, shared by all ISRs; examples are Linux and VxWorks.

An ISR should be as short as possible, because it runs with interrupts disabled, which prevents other

interrupts from being serviced, and, hence, other tasks from proceeding. The ISR should service the

peripheral device it was triggered by, and then return. This servicing typically consists of reading or

writing some registers on the device, and buffering the data in a place where some other task can process

them further, outside of the ISR and hence with interrupts enabled again. This further processing is the

goal of the DSR (Deferred Service Routine), Section 3.4. Getting the data from the ISR to the DSR

should be done in a non-blocking way; FIFOs (Section 5.2) or circular buffers (Section 5.4) are often

used for this purpose.
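
To make the registration described above concrete, here is a minimal sketch against the 2.4-era Linux kernel API; the IRQ number, the device name and the handler body are hypothetical:

#include <linux/interrupt.h>

#define MYDEV_IRQ 7                    /* hypothetical interrupt line of our device */

/* The ISR: keep it short, acknowledge the device, defer the real work. */
static void mydev_isr(int irq, void *dev_id, struct pt_regs *regs)
{
    /* read/acknowledge the device registers here, then wake up a DSR */
}

static int mydev_init(void)
{
    /* Puts the address of mydev_isr in the kernel's table for MYDEV_IRQ;
     * the SA_SHIRQ flag (not used here) would allow sharing the line. */
    if (request_irq(MYDEV_IRQ, mydev_isr, 0, "mydev", NULL) != 0)
        return -1;                     /* line busy or invalid */
    return 0;
}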

• Trap handler/service request. A synchronous interrupt is sometimes also called a trap

(http://www.osdata.com/topic/language/asm/trapgen.htm) or a software interrupt. The software

interrupts are “called” by the processor itself, such as in the case of register overflow, page address

errors, etc. They work like hardware interrupts (saving state, switching to protected mode, jumping

to handler), but they run with the hardware interrupts enabled.

These software interrupts are very important, because they are the only means with which user space

tasks can execute “protected operations,” or “privileged instructions.” These privileged instructions

can only be executed when the processor is in its protected mode (also called privileged mode).

Privileged instructions are operations such as: to address physical IO directly; to work with the

memory management infrastructure such as the page lookup table; to disable and enable interrupts; or

to halt the machine. Privileged instructions are available to user space tasks via system calls, that are in

fact handlers of software interrupts: a system call puts some data in registers or on the stack, and then

executes a software interrupt, which makes the processor switch to protected mode and run the

interrupt handler. That handler can use the register or stack data for task specific execution. Recall that

the handler of a software interrupt runs in the context of the task that executes the system call, so it can

read the data that the task has put on the stack. But now the execution takes place in the protected

mode of the processor.
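
As an illustration of this mechanism, the following sketch invokes the write() system call on 32-bit x86 Linux directly through the int 0x80 software interrupt; the syscall number 4 and the register conventions are specific to that ABI, and the code is for illustration only:

/* 32-bit x86 Linux only: the kernel's trap handler reads the syscall
 * number from eax and the arguments from ebx, ecx and edx, executes in
 * protected mode, and returns the result in eax. */
static const char msg[] = "trap example\n";

long write_via_trap(void)
{
    long ret;
    __asm__ volatile ("int $0x80"
                      : "=a" (ret)                 /* result in eax */
                      : "a" (4),                   /* __NR_write on this ABI */
                        "b" (1),                   /* fd = stdout */
                        "c" (msg),                 /* buffer */
                        "d" (sizeof(msg) - 1)      /* byte count */
                      : "memory");
    return ret;
}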

A system call is just one example of a software interrupt, or trap. An interrupt service routine of a trap

is often called a trap handler. Still another name for a software interrupt is a service request (SRQ).

Each type of CPU has part of its interrupt vector reserved for these trap handlers. Operating systems

typically have a default trap handler installed, which they attach to all possible software interrupts in

your system. Usually, you can replace any of these by your own. For example, RTAI has a

rt_set_rtai_trap_handler() function for this purpose. The OS also reserves a number of traps as system

signals. For example, RTAI reserves 32 signals, most of which correspond to what standard Linux uses.

Trap handlers are a major tool in debugging: compiling your code with the debug option turned on

results, among other things, in the introduction in (the compiled version of) your original code of

machine instructions that generate a trap after each line in your code. The ISR triggered by that trap

can then inform the debug task about which “breakpoint” in your program was reached, thanks to the

register information that the trap has filled in (Section 3.3).

Another major application of the trap functionality, certainly in the context of this document, is their

use by RTAI to deliver user space hard real-time functionality (Section 11.5). Linux just uses one

single software interrupt, at address 0x80, for its user space system calls, leaving a lot of software

interrupts to applications such as RTAI.

• Interrupt latency. This is the time between the arrival of the hardware interrupt and the start of the

execution of the corresponding ISR. The latency is not a crisp number, but rather a statistical quantity,

because it is influenced by a large number of non-deterministic effects (Section 1.3.2). This becomes

more and more the case in modern processors with their multiple levels of caches and instruction

pipelines, that all might need to be reset before the ISR can start. This latter fact is at the origin of the


somewhat counter-intuitive phenomenon that some modern Gigahertz CPUs have longer interrupt

latencies than much older digital signal processors.

• Interrupt enable/disable. Each processor has atomic operations to enable or disable (“mask”) the

interrupts. Common names for these functions are sti() (“set interrupt enable flag”, i.e., enable

interrupts to come through to interrupt the CPU) and cli() (“clear interrupt enable flag”, i.e., don’t

allow interrupts). In the same context, one finds functions like save_flags() and

restore_flags(). These are a bit more fine-grained than sti() and cli(), in the sense that they

save/restore a bit-sequence where each bit corresponds to a hardware interrupt line and indicates

whether or not the interrupt on that particular line should be enabled. In other words, it saves the

“state” of the interrupt vector. (restore_flags() in some cases does an implicit enabling of the

interrupts too.) Note that cli() disables all interrupts on all processors in an SMP system. That is a

costly approach to use, particularly so in an RTOS.
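
A typical usage pattern, sketched here against the 2.4-era Linux kernel API and with a hypothetical shared counter, saves the interrupt state, masks interrupts around the critical section, and restores the previous state afterwards:

static volatile int shared_counter;    /* hypothetical data shared with an ISR */

void update_shared_counter(void)
{
    unsigned long flags;

    save_flags(flags);      /* remember the current interrupt-enable state */
    cli();                  /* mask interrupts: no ISR can interleave here */
    shared_counter++;       /* the critical section, kept as short as possible */
    restore_flags(flags);   /* re-enable interrupts only if they were enabled before */
}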

• Interrupt priorities. Some systems offer, as a hardware feature, (static) priorities to interrupts. That

means that the OS blocks a new interrupt if an ISR of an interrupt with a higher priority is still

running. (Or rather, as long as it has not enabled the interrupts again.) Similarly, the ISR of a

lower-priority interrupt is pre-empted when a higher-priority interrupt comes in. Hence, ISRs must be

re-entrant. And, if the processor allows interrupt priorities, most opportunities/problems that are

known in task scheduling (see Section 2.4) show up in the interrupt handling too!

• Prioritized interrupt disable. Prioritized enabling/disabling of the interrupts is a software feature (that

must have hardware support, of course) that allows the programmer to disable interrupts below a

specified priority level. Microsoft NT is an example of an OS kernel that extensively uses this feature.

• Interrupt nesting. If the processor and/or operating system allow interrupt nesting, then an ISR

servicing one interrupt can itself be pre-empted by another interrupt (which could come from the same

peripheral device that is now being serviced!). Interrupt nesting increases code complexity, because

ISRs must use re-entrant code only, i.e., the ISR must be written in such a way that it is robust against

being pre-empted at any time.

• Interrupt sharing. Many systems allow different peripheral devices to be linked to the same hardware

interrupt. The ISR servicing this interrupt must then be able to find out which device generated the

interrupt. It does this by (i) checking a status register on each of the devices that share the interrupt, or

(ii) calling in turn all ISRs that users have registered with this IRQ.

Interrupt sharing is implemented in most general purpose operating systems, hence also in the Linux

kernel. (See the file kernel/softirq.c.) Linux accepts multiple interrupt handlers on the same

interrupt number. The kernel hangs its own ISR on the hardware interrupt, and that kernel ISR invokes

one by one all the handler routines of the ISRs that have been registered by the application programs.

This means that they will be executed after the hardware ISR has finished, but before any other tasks,

and with interrupts enabled.

While the Linux kernel does interrupt sharing as mentioned in the previous paragraph, RTLinux and

RTAI don’t: they allow only one single ISR per IRQ, in order to be as deterministic as possible. (So,

be careful when putting interface cards in your computer, because all the ones for which you want to

install real-time drivers must be connected to different interrupt lines!) The real-time ISR that a user

program has registered is directly linked to the hardware interrupt, and hence runs with all interrupts

disabled. The other ISRs on that same IRQ are only executed when the non-real-time Linux kernel on

top of the RTOS gets the occasion to run, i.e., after all real-time activity is done, also non-ISR activity.


3.4. ISR, DSR and ASR

An ISR should be as short as possible, in order to minimize the delay of interrupts to other ISRs, and the

scheduling of tasks. In general-purpose operating systems, only the ISR that the OS has attached to each

IRQ runs with interrupts disabled, but not the user-registered ISRs. A real real-time operating system, on

the other hand, allows only one ISR per IRQ, otherwise, the time determinism of the other ISRs is not

guaranteed! This makes the job for real-time programmers a bit easier, because they can design

non-re-entrant (and hence often simpler and faster) ISRs: the ISR can store local information without the

danger that it can be overwritten by another invocation of the same ISR code, and it has the guarantee of

atomicity (i.e., the ISR will run without being pre-empted). However, when the OS and the hardware

allow interrupt priorities, the ISR at one IRQ level can be pre-empted by a higher-priority interrupt.

Typically, a hardware ISR just reads or writes the data involved in the communication with the peripheral

device or the trap that caused the interrupt, acknowledges the interrupt if the peripheral device requires it,

and then, if needed, wakes up another “task” to do any further processing. For example, most drivers for

network cards just transfer the raw packet data to or from the card in the ISR, and delegate all

interpretation of the data to another task. The skeleton of a typical ISR-DSR combination would look

like this:

dsr_thread()
{
    while (1) {
        wait_for_signal_from_isr();
        process_data_of_ISR();           // including all blocking stuff
    }
}

interrupt_handler()
{
    reset_hardware();                    // acknowledge the peripheral device
    do_isr_stuff();                      // short, non-blocking work only
    send_signal_to_wake_up_dsr();
    re_enable_interrupts();              // some RTOSs do this automatically
}

In the Linux kernel, this latter task used to be a bottom half, while the hardware interrupt-driven ISR was

called the top half . (Note that some operating systems use opposite terminology.) The bottom half

concept is more or less abandoned, and replaced by tasklets and softirqs (see the files

include/linux/interrupt.h and kernel/softirq.c). The reason for abandoning the bottom

halves is that Linux has a hard limit of maximum 32 bottom half functions. Moreover, they run with

locks over the whole system, which is not very good for multi-processor systems. The softirq was

introduced in the 2.3.43 kernel, as a multi-processor-aware version of the bottom half; there are still only

32 of them, so application programmers should stay away from them, and use tasklets instead.

(Tasklets are a very appropriate primitive in the context of interrupt servicing, but their usefulness is in

no way limited to this context!)

“Tasklet” is a bit of an unfortunate name, because it has not much to do with schedulable tasks: a tasklet

is a function that the kernel calls when an ISR has requested a “follow-up” of its interrupt servicing.

Outside of the Linux world, this follow-up function is more often called DSR, Deferred Service

Routine, or (in Microsoft NT) Deferred Procedure Call. In Linux, an unlimited number of tasklets is

allowed, and they have the same behaviour and functionality as the softirqs.

In Linux the ISR requests the execution of a tasklet/DSR via the tasklet_schedule(&tasklet)

command. The tasklet has first to be initialized with a tasklet_init(&tasklet,

tasklet_function, data) call; this call links a tasklet identifier with the function to be executed and a

data structure in which the ISR can store information for processing by the tasklet. The tasklet (or DSR,

or softirq) runs with interrupts enabled, but outside of the context of a particular task, just as the ISR that

has requested it. This means that neither the ISR, nor the DSR can use variables that have been defined

locally in the scope of the task(s) to which they logically are related. The execution of tasklets is

implemented in the kernel/softirq.c file, and both tasklets and softirqs are treated as softirq

tasks.
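
A minimal sketch of this ISR-tasklet pattern, using the 2.4/2.6-era tasklet API in which the tasklet function takes an unsigned long argument; the device specifics are hypothetical:

#include <linux/interrupt.h>

static struct tasklet_struct mydev_tasklet;

/* The DSR: runs later, with interrupts enabled, outside any task context. */
static void mydev_tasklet_fn(unsigned long data)
{
    /* process the data that the ISR buffered for us */
}

/* The ISR: do the minimum, then ask the kernel to run the tasklet. */
static void mydev_isr(int irq, void *dev_id, struct pt_regs *regs)
{
    /* read the device registers, buffer the raw data, ... */
    tasklet_schedule(&mydev_tasklet);
}

static int mydev_init(void)
{
    tasklet_init(&mydev_tasklet, mydev_tasklet_fn, 0 /* data */);
    /* ... register mydev_isr with request_irq(), as sketched earlier ... */
    return 0;
}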

Linux (and many other general purpose operating systems) executes the DSRs in sequence (without

mutual pre-emption, that is), at the end of hardware ISRs and before the kernel returns to user space. So,

at the end of each kernel call, the scheduler checks whether some DSRs are ready to be executed; see the

kernel/softirq.c file in the Linux source code.

RTAI also has tasklets, and their semantics is more or less like the Linux tasklets. However, RTAI added

some extra features (see the files include/rtai_tasklets.h and tasklets/tasklets.c):

• There is a special class of tasklets, called timers. They can be used to let context-independent

functions run with specified timings.

• RTAI also allows a user space function to be executed as a tasklet.

• The RTAI tasklets are executed by a dedicated task and/or ISR in the RTAI kernel.

An ISR is not allowed to use semaphores or any other potentially blocking system calls: an ISR that

blocks on a lock held by another task causes big trouble, because all interrupts are disabled when the ISR

runs, such that the condition to wake up the other task might never occur. The same holds for RTAI

tasklets: a blocking tasklet also blocks the timer ISR or task that executes all tasklets.

Avoiding blocking calls is sufficient for maximum determinism in a UP (“uni-processor”) system. In

a multi-processor system, however, a race condition (see Section 4.2) can occur between the hardware

ISR on one processor, and any other task on one of the other processors; e.g., because the ISR and the

other task access shared data. (Remember that the ISR cannot use a lock!) The easiest solution is to not


only mask the interrupts for one processor, but for all of them. This, however, prevents all processors

from working. One way around this is to use spinlocks (see Section 4.6.3). The operating system also helps a

bit, by guaranteeing that tasklets are serialized over all processors in the system; i.e., only one is

executed at a time.
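
On SMP Linux, the usual idiom for data shared between an ISR and ordinary kernel code combines such a spinlock with local interrupt masking; a sketch with a hypothetical lock and counter (SPIN_LOCK_UNLOCKED is the 2.4-era initializer):

#include <linux/spinlock.h>

static spinlock_t mydev_lock = SPIN_LOCK_UNLOCKED;
static int mydev_shared_state;               /* shared with the ISR */

void task_side_update(void)
{
    unsigned long flags;

    /* Disable interrupts on this CPU and spin until the lock is free on
     * all other CPUs: neither the local ISR nor a task on another
     * processor can now touch mydev_shared_state. */
    spin_lock_irqsave(&mydev_lock, flags);
    mydev_shared_state++;
    spin_unlock_irqrestore(&mydev_lock, flags);
}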

Some operating systems have one more level of interrupt servicing: besides the ISR and DSR functions,

they offer the possibility to use Asynchronous Service Routines (ASR). (This name is not as standardized

as ISR and DSR.) In Microsoft NT, it is called an Asynchronous Procedure Call; eCos calls it DSR;

Linux doesn’t have the concept. ASRs can run after all DSRs have finished, but before normal tasks get a

chance to be scheduled. Their goal is to execute that part of the reaction to an interrupt, that needs the

thread’s context; for example, to make the thread stop some of its activities, including itself.

The eCos operating system executes an ASR with interrupts enabled, with the scheduler disabled, and

always in the context of one specific thread. So, the ASR can call all system functions, which is not the

case for ISR and DSR, which are not bound to a deterministically defined context.

The RTAI operating system gives the possibility to add to each real-time task a user-defined function that

runs in the task’s context and with interrupts disabled, every time that the task gets scheduled (hence, not

just when an interrupt has occurred). This allows, for example, an interrupt servicing to indirectly change

some task-specific attributes at each scheduling instant. This user-defined function is called signal()

and is filled in by rt_task_init() (XXX ???) in the task data structure. However, it’s just a pointer to

a function, so it could be filled in or changed on-line.

Chapter 4. IPC: synchronization

The decision about what code to run next is made by the operating system (i.e., its scheduler), or by the

hardware interrupts that force the processor to jump to an associated interrupt routine. To the scheduler

of the OS, all tasks are just “numbers” in scheduling queues; and interrupts “talk” to their own interrupt

service routine only. So, scheduler and interrupts would be sufficient organizational structure in a system

where all tasks just live next to each other, without need for cooperation. This, of course, is not sufficient

for many applications. For example, an interrupt service routine collects measurements from a peripheral

device, this data is processed by a dedicated control task, the results are sent out via another peripheral

device to an actuator, and displayed for the user by still another task.

Hence, the need exists for synchronization of different tasks (What is the correct sequence and timing to

execute the different tasks?), as well as for data exchange between them. Synchronization and data

exchange are complementary concepts, because the usefulness of exchanged data often depends on the

correct synchronization of all tasks involved in the exchange. Both concepts are collectively referred to

as Interprocess communication (“IPC”).

The role of the operating system in matters of IPC is to offer a sufficiently rich set of IPC-supporting

primitives. These should allow the tasks to engage in IPC without having to bother with the details of

their implementation and with hardware dependence. This is not a minor achievement of the operating

system developers, because making these IPC primitives safe and easy to use requires a lot of care and

insight. In any case, the current state-of-the-art in operating systems’ IPC support is such that they still

don’t offer much more than just primitives. Hence, programmers have to know how to apply these

primitives appropriately when building software systems consisting of multiple concurrent tasks; this

often remains a difficult, because error-prone, design and implementation job. Not least because no

one-size-fits-all solution can exist for all application needs.

4.1. IPC terminology

The general synchronization and data exchange problems involve (at least) two tasks, which we will call

the “sender” and the “receiver”. (These tasks are often also called “writer” and “reader”, or “producer”

and “consumer”.) For synchronization, “sender” and “receiver” want to make sure they are both in (or

not in) specified parts of their code at the same time. For data exchange, “sender” and “receiver” want to

make sure they can exchange data efficiently, without having to know too much of each other

(“decoupling”, Chapter 14), and according to several different policies, such as blocking/non-blocking,

or with/without data loss.

Data exchange has a natural direction of flow, and, hence, the terminology “sender” and “receiver” is

appropriate. Synchronization is often without natural order or direction of flow, and, hence, the

terminology “sender” and “receiver” is less appropriate in this context, and “(IPC) client” might be a

more appropriate, because symmetric, terminology. Anyway, the exact terminology doesn’t matter too

much. Unless we want to be more specific, we will use the generic system calls send() and

receive() to indicate the IPC primitives used by sender and receiver, respectively.

4.1.1. Blocking/Non-blocking

IPC primitives can have different effects on task scheduling:

• Blocking. When executing the send() part of the IPC, the sender task is blocked (i.e., non-available

for scheduling) until the receiver has accepted the IPC in a receive() call. And similarly the other

way around. If both the sender and the receiver block until both of them are in their send() and

receive() commands, the IPC is called synchronous. (Other names are: rendez-vous, or handshake.)

Synchronous IPC is the easiest to design with, and is very similar to building hardware systems.

• Non-blocking (asynchronous). Sender and receiver are not blocked in their IPC commands. This

means that there is incomplete synchronization: the sender doesn’t know when the receiver will get its

message, and the receiver cannot be sure the sender is still in the same state as when it sent the

message.

• Blocking with time out. The tasks wait in their IPC commands for at most a specified maximum

amount of time.

• Conditional blocking. The tasks block in their IPC commands only if a certain condition is fulfilled.

Of course, blocking primitives should be used with care in real-time sections of a software system.
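
As one concrete illustration, POSIX message queues support both policies: the same queue can be opened in blocking or in non-blocking mode. A sketch (the queue name and sizes are hypothetical):

#include <mqueue.h>
#include <fcntl.h>

/* Open the same queue twice: mq_receive() on the first descriptor blocks
 * until a message arrives; on the second it returns immediately with
 * errno == EAGAIN when the queue is empty. */
void open_queues(mqd_t *blocking, mqd_t *non_blocking)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };

    *blocking     = mq_open("/mydata", O_RDONLY | O_CREAT, 0600, &attr);
    *non_blocking = mq_open("/mydata", O_RDONLY | O_NONBLOCK);
}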

4.1.2. Coupling

IPC primitives can use different degrees of coupling:

• Named connection: sender and receiver know about each other, and can call each other by name. That

means that the sender fills in the unique identifier of the receiver in its send() command, and vice

versa. This can set up a connection between both tasks, in a way very similar to the telephone system,

where one has to dial the number of the person one wants to talk to.

The connection can be one-to-one, or one-to-many (i.e., the single sender sends to more than one

receiver, such as for broadcasting to a set of named correspondents), or many-to-one (for example,

many tasks send logging commands to an activity logging task), or many-to-many (for example, video

conferencing).

• Broadcast: the sender sends its message to all “listeners” (without explicitly calling them by name) on

the (sub-branch of the) network to which it is connected. The listeners receive the message if they

want, without the sender knowing exactly which tasks have really used its message.

• Blackboard: while a broadcast is a message on a network-like medium (i.e., the message is not stored

in the network for later use), a blackboard IPC stores the messages from different senders. So,

receivers can look at them at any later time.

• Object request broker (ORB): the previous types of IPC all imply a rather high level of coupling

between sender and receiver, in the sense that they have to know explicitly the identity of their

communication partner, of the network branch, or of the blackboard. The current trend towards more

distributed and dynamically reconfigurable computer systems calls for more loosely-coupled forms of

IPC. The ORB concept has been developed to cover these needs: a sender component registers its

interface with the ORB; interested receivers can ask the broker to forward their requests to an

appropriate sender (“server”) component, without the need to know its identity, nor its address.

4.1.3. Buffering

IPC primitives can use different degrees of buffering, ranging from the case where the operating system

stores and delivers all messages, to the case where the message is lost if the receiver is not ready to

receive it.

Not all of the above-mentioned forms of IPC are equally appropriate for real-time use, because some

imply too much and/or too indeterministic overhead for communication and resource allocation.

4.2. Race conditions and critical sections

Often, two or more tasks need access to the same data or device, for writing and/or reading. The origin of

most problems with resource sharing (or resource allocation) in multi-tasking and multi-processor

systems is the fact that operations on resources can usually not be performed atomically, i.e., as if they

were executed as one single, non-interruptable instruction that takes zero time. Indeed, a task that

interfaces with a resource can at any instant be pre-empted, and hence, when it gets re-scheduled again, it

cannot just take for granted that the data it uses now is in the same state as (or at least in a state

consistent with) the state before the pre-emption. Consider the following situation:

data number_1;
data number_2;

task A
{
    data A_number;
    A_number = read(number_1);
    A_number = A_number + 1;
    write(number_2, A_number);
}

task B
{
    if ( read(number_1) == read(number_2) )
        do_something();
    else
        do_something_else();
}

task B takes different actions based on the (non-)equality of number_1 and number_2. But

task B can be pre-empted in its if statement by task A, exactly at the moment that task B has already read

number_1, but not yet number_2. This means that it has read number_1 before the pre-emption,

and number_2 after the pre-emption, which violates the validity of the test.

The if statement is one example of a so-called critical section: it is critical to the validity of the code

that the access to the data used in that statement (i.e., number_1 and number_2) be executed

atomically, i.e., un-interruptable by anything else. (Most) machine code instructions of a given processor

execute atomically; but instructions in higher-level programming languages are usually translated into a

sequence of many machine code instructions, such that atomicity cannot be guaranteed.
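
One standard remedy is to guard the critical sections with a mutual-exclusion lock, so that the read-modify-write in task A and the double read in task B can never interleave. A minimal POSIX threads sketch of the example above, keeping the names from the pseudocode:

#include <pthread.h>

static int number_1, number_2;
static pthread_mutex_t numbers_lock = PTHREAD_MUTEX_INITIALIZER;

void *task_A(void *arg)
{
    pthread_mutex_lock(&numbers_lock);    /* start of the critical section */
    number_2 = number_1 + 1;
    pthread_mutex_unlock(&numbers_lock);  /* end of the critical section */
    return NULL;
}

void *task_B(void *arg)
{
    int equal;

    pthread_mutex_lock(&numbers_lock);    /* task A cannot interleave here */
    equal = (number_1 == number_2);
    pthread_mutex_unlock(&numbers_lock);

    /* call do_something() or do_something_else() based on 'equal' */
    return NULL;
}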

There are three generic types of critical sections:

• Access to the same data from different tasks, as illustrated by the example above.

• Access to a service. For example, allocation of a resource, execution of a “transaction” on a database.

The service typically has to process a sequence of queries, and these have to succeed as a whole, or

fail as a whole.

• Access to procedure code. Application tasks often run exactly the same code (for example the control

algorithm in each of the joints of a robot), but on other data, and some parts of that code should be

executed by one task at a time only.

Of course, many applications involve combinations of different resource sharing needs.

The problem in all above-mentioned examples of access to shared resources is often called a race

condition: two or more tasks compete (“race”) against each other to get access to the shared resources.

Some of these race conditions have been given a special name:

• Deadlock. Task A has locked a resource and is blocked waiting for a resource that is locked by

task B, while task B is blocked waiting for the resource that is locked by task A.

• Livelock. This situation is similar to the deadlock, with this difference: both tasks are not blocked but

are actively trying to get the resource, in some form of busy waiting.

• Starvation. In this situation, some tasks never get the chance to allocate the resource they require,

because other tasks always get priority.

The four conditions that have to be satisfied in order to (potentially!) give rise to a deadlock are:

1. Locks are only released voluntarily by tasks. So, a task that needs two locks might obtain the first

lock, but block on the second one, so that it is not able anymore to voluntarily release the first lock.

2. Tasks can only get in a deadlock if they need more than one lock, and have to obtain them in a

(non-atomic) sequential order.

3. The resources guarded by locks can only be allocated to one single task. (Or to a finite number of

tasks.)

4. Tasks try to obtain locks that other tasks have already obtained, and these tasks form a circular list.

For example, task A is waiting for task B to release a lock, task B is waiting for task C to

release a lock, and task C is waiting for task A.

As soon as one of these four conditions is not satisfied, a deadlock can not occur. Moreover, these

conditions are not sufficient for deadlocks to occur: they just describe the conditions under which it is

possible to have deadlocks.

The literature contains many examples of deadlock prevention and avoidance algorithms. Deadlock

prevention makes sure that all four necessary conditions are never satisfied at the same time; deadlock

avoidance allows the possibility for a deadlock to occur, but makes sure that this possibility is never

realized. Both kinds of algorithms, however, often require some form of “global” knowledge about the

states of all tasks in the system. Hence, they are too indeterministic for real-time execution, and not

suitable for component-based design (because the requirement for global knowledge is in contradiction

with the loose coupling strived for in component systems; see Chapter 14).

There are some guaranteed deadlock prevention algorithms that are reasonably simple to implement. For

example, a deadlock cannot occur if all programs always take locks in the same order. This requires a

globally known and ordered list of locks, and coding discipline from the programmers. Other prevention

algorithms use some of the following approaches: only allow each task to hold one resource; pre-allocate

resources; force release of a resource before a new request can be made; or ordering all tasks and giving

them priority according to that order.
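
A sketch of the lock-ordering discipline just mentioned, using POSIX mutexes; the globally agreed order here is simply the locks’ position in an array:

#include <pthread.h>

/* The agreed order: lock[0] before lock[1] before lock[2].  Any task that
 * needs several of these must take them in ascending index order, which
 * makes a circular wait (condition 4 above) impossible. */
static pthread_mutex_t lock[3] = {
    PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER
};

/* Take the two locks with indices lo < hi, always in the same order. */
void take_two(int lo, int hi)
{
    pthread_mutex_lock(&lock[lo]);
    pthread_mutex_lock(&lock[hi]);
}

void release_two(int lo, int hi)
{
    pthread_mutex_unlock(&lock[hi]);       /* release in reverse order */
    pthread_mutex_unlock(&lock[lo]);
}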

Race conditions occur on a single processor system because of its multi-tasking and interrupt

functionalities. But they show up even more on multi-processor systems: even if one CPU is preventing

the tasks that it runs from accessing a resource concurrently, a task on another CPU might interfere.

4.3. Signals

Signals are one of the IPC synchronization primitives used for asynchronous notification: one task fires a

signal, which can cause other tasks to start doing things. The emphasis is on “asynchronous” and on

“can”:

• Asynchronous: the tasks that react to signals are in a completely arbitrary state, unrelated to the signaling task. Their reaction to the signal need not be instantaneous or synchronized with the signaling task. Moreover, the task that sends the signal and the tasks that use the signal need not share any memory, unlike with semaphores or mutexes. This makes signals about the only synchronization primitive that is straightforward to scale over a network.

• Can: the signaling task fires a signal, and continues with its job. Whether or not other tasks do something with its signal is none of its concerns. The operating system takes care of the delivery of the signal, and if nobody wants it, it is just lost.

In most operating systems, signals

• are not queued. A task’s signal handler has no means to detect whether it has been signaled more than once.

• carry no data.

• have no deterministic delivery time. A task that gets signaled is not necessarily scheduled immediately.

• have no deterministic order. A task that gets signaled multiple times has no way to find out in which temporal order the signals were sent.

So, these are reasons to avoid signals for synchronization between two running tasks, [BrinchHansen73].

In other words: notification in itself is not sufficient for synchronization. Synchronization needs two tasks

that do something together, while taking notice of each other, and respecting each other’s activities. Later

sections of the text present IPC primitives that are better suited for synchronization than signals.

POSIX has standardized signals and their connection to threads. The OS offers a number of pre-defined signals (such as “kill”), and a task can ask the operating system to connect a handler (i.e., a function) to a particular signal on its behalf. The handler is “registered” using the system call sigaction(). The task also asks the OS to receive or block a specific subset of all available signals; this is its “signal mask”. Whenever a signal is received by the operating system, it executes the registered handlers of all tasks that have this signal in their mask. The task can also issue a sigwait(signal), which makes it sleep until the signal is received; in this case, the signal handler is not executed. Anyway, signals are a bit difficult to work with, as illustrated by this quote from the sigwait man page:

For sigwait to work reliably, the signals being waited for must be blocked in all threads, not only in the calling thread, since otherwise the POSIX semantics for signal delivery do not guarantee that it’s the thread doing the sigwait that will receive the signal. The best way to achieve this is to block those signals before any threads are created, and never unblock them in the program other than by calling sigwait.

The masks are also set on a per-thread basis, but the signal handlers are shared between all threads in a

process. Moreover, the implementation of signals tends to differ between operating systems, and the

POSIX standard leaves room for interpretation of its specification. For example, it doesn’t say anything

about the order in which blocked threads must be woken up by signals. So, these are reasons why many

developers don’t use signals too much.
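To make the man-page advice concrete, here is a minimal sketch (all names invented) that blocks SIGUSR1 before any thread is created, and then handles it synchronously in a dedicated thread with sigwait():

#include <pthread.h>
#include <signal.h>
#include <stdio.h>

/* A dedicated signal-handling thread: it waits synchronously for
   SIGUSR1 with sigwait(); no asynchronous handler ever runs. */
static void *signal_thread(void *arg) {
  sigset_t set;
  int signum;
  sigemptyset(&set);
  sigaddset(&set, SIGUSR1);
  for (;;) {
    sigwait(&set, &signum);          /* sleep until SIGUSR1 arrives */
    printf("got signal %d\n", signum);
  }
  return NULL;
}

int main(void) {
  sigset_t set;
  pthread_t tid;
  sigemptyset(&set);
  sigaddset(&set, SIGUSR1);
  /* Block the signal before any thread is created, as the quote above
     recommends; all threads inherit this mask. */
  pthread_sigmask(SIG_BLOCK, &set, NULL);
  pthread_create(&tid, NULL, signal_thread, NULL);
  /* ... the rest of the program; another task (or the shell, with
     "kill -USR1 <pid>") can now fire the signal ... */
  pthread_join(tid, NULL);
  return 0;
}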

POSIX has a specification for so-called “real-time signals” too. Real-time signals are queued, they pass a 4-byte data value to their associated signal handler, and they are guaranteed to be delivered in numerical order, i.e., from lowest signal number to highest. For example, RTLinux implements POSIX real-time signals, and offers 32 different signal levels (see the file include/rtl_sched.h in the RTLinux source tree). RTAI also offers a 32-bit unsigned integer for events, but in a slightly different way: the integer is used to allow signalling multiple events: each bit in the integer is an event, and a task can ask to be notified when a certain AND or OR combination of these bits becomes valid (see the file include/rtai_bits.h in the RTAI source tree).
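As an illustration of the user-space POSIX API (not of the RTLinux or RTAI implementations), the following sketch queues a real-time signal carrying a small data value to the process itself; the handler name rt_handler is invented:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void rt_handler(int signum, siginfo_t *info, void *context) {
  /* (printf is not async-signal-safe; used only to keep the sketch short.) */
  printf("RT signal %d carries value %d\n", signum, info->si_value.sival_int);
}

int main(void) {
  struct sigaction sa;
  union sigval value;

  sa.sa_flags = SA_SIGINFO;            /* deliver a siginfo_t to the handler */
  sa.sa_sigaction = rt_handler;
  sigemptyset(&sa.sa_mask);
  sigaction(SIGRTMIN, &sa, NULL);

  value.sival_int = 42;                /* the 4-byte data payload */
  sigqueue(getpid(), SIGRTMIN, value); /* queued, unlike plain signals */
  return 0;
}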

4.4. Exceptions

Exceptions are signals that are sent (“raised”) synchronously, i.e., by the task that is currently running.

(Recall that signals are asynchronous, in the sense that a task can receive a signal at any arbitrary

moment in its lifetime.) Exceptions are, roughly speaking, a signal from a task to itself. As operating system primitive, an exception is a software interrupt (see Section 3.1) used to handle non-normal cases in the execution of a task: numerical errors; devices that are not reachable or deliver illegal messages; etc. The software interrupt gives rise to the execution of an exception handler, that the task (or the


operating system, or another task) registered previously. In high-level programming languages, an

exception need not be a software interrupt, but it is a function call to the language’s runtime support, that

will take care of the exception handling.

4.5. Atomic operations

The concept of an atomic operation is very important in interprocess communication, because the

operating system must guarantee that taking or releasing a lock is done without interruption. That can

only be the case if the hardware offers some form of atomic operation on bits or bytes. Atomic

operations come in various forms: in the hardware, in the operating system, in a language’s run-time, or

in an application’s support library, but always, the hardware atomic operation is at the bottom of the

atomic service. This Section focuses on the hardware support that is commonly available.

Most processors offer an atomic machine instruction to test a bit (or a byte or a word). In fact, the

operation not just tests the bit, but also sets the bit if that bit has not already been set. Hence, the associated assembly instruction is often called test_and_set(), or something similar. Expressed in pseudo-code, test_and_set() would look like this:

int test_and_set(int *lock){

int temp = *lock;

*lock = 1;

return temp;

}
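On modern compilers the same primitive is available portably; here is a minimal sketch using the C11 atomic_flag type (the function names busy_lock() and busy_unlock() are invented):

#include <stdatomic.h>

atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void busy_lock(void) {
  /* atomic_flag_test_and_set() returns the previous value, so this loop
     spins until it is the caller that flipped the flag from 0 to 1. */
  while (atomic_flag_test_and_set(&lock_flag)) { /* busy wait */ }
}

void busy_unlock(void) {
  atomic_flag_clear(&lock_flag);
}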

Another atomic instruction offered by (fewer) processors is compare_and_swap(address,old,new): it compares the value at a given memory address with an “old” value given as parameter, and overwrites it with a “new” value if the compared values are the same; in this case, it returns “true”. If the values are not equal, the memory location is left unchanged and “false” is returned. Examples of processors with a compare_and_swap() are the Alpha, ia32/ia64, SPARC and the M68000/PowerPC. (Look in the Linux source tree for the macro __HAVE_ARCH_CMPXCHG to find them.)

The compare_and_swap() operation is appropriate for the implementation of the synchronization needed in, for example, swinging pointers (see Section 4.10): in this case, the parameters of the compare_and_swap(address,old,new) are the address of the pointer and its old and new values.

The following pseudo-implementation is the simplest way to understand the semantics of compare_and_swap():

int compare_and_swap(address, old, new) {
  get_lock();
  if (*address == old) {
    *address = new;    // assignment, not comparison
    release_lock();
    return (1);
  } else {
    release_lock();
    return (0);
  };
}

The compare_and_swap() can, however, be implemented without locks, using the following pair of atomic instructions: load_linked() and store_conditional(). Together, they implement an atomic read-modify-write cycle. The idea is that the load_linked() instruction marks a memory location as “reserved” (but does not lock it!), and if no processor has tried to change the contents of that memory location when the store_conditional() takes place, the store will succeed, otherwise it will fail. If it fails, the calling task must decide what to do next: retry, or do something else.

This pair of instructions can be used to implement compare_and_swap() in an obvious way, and without needing a lock:

int compare_and_swap(address, old, new) {
  temp = load_linked(address);
  if (old == temp) return store_conditional(address,new);
  else return 0;    // values differ: no store attempted
}

The old == temp test need not take place in a critical section, because both arguments are local to this single task.

There are some important caveats with the compare_and_swap() function:

• It only compares the values at a given memory location, but does not detect whether (or how many times) this value has changed! That is: a memory location can be changed twice and have its original value back. To overcome this problem, a more extensive atomic operation is needed, the double_word_compare_and_swap(), which also checks a tag attached to the pointer, and increments the tag at each change of the value of the pointer. This operation is not very common in processors!

• It is not multi-processor safe: (TODO: why exactly?)

While the hardware support for locks is quite satisfactory, there is no support for transaction rollback.

Transaction rollback means that the software can undo the effects of a sequence of actions, in such a way

that the complete sequence takes place as a whole, or else is undone without leaving any trace.

Transaction rollback is a quite advanced feature, and not supported by operating systems; it’s however a

primary component of high-end database servers.


4.6. Semaphore, mutex, spinlock, read/write lock, barrier

Race conditions can occur because the access to a shared resource is not well synchronized between

different tasks. One solution is to allow tasks to get a lock on the resource. The simplest way to lock is to

disable all interrupts and disable the scheduler when the task wants the resource. This is certainly quite

effective for the running task, but also quite drastic and far from efficient for the activity of all other

tasks. Hence, programmers should not use these methods lightly if they want to maintain real

multi-tasking in the system. So, this text focuses on locking mechanisms that do not follow this drastic

approach. Basically, programmers can choose between two types of locking primitives (see later sections

for more details):

1. One based on busy waiting. This method has overhead due to wasting CPU cycles in the busy

waiting, but it avoids the overhead due to bookkeeping of queues in which tasks have to wait.

2. One based on the concept of a semaphore. This method has no overhead of wasting CPU cycles, but

it does have the overhead of task queue bookkeeping and context switches.

A generic program that uses locks would look like this:

data number_1;

data number_2;

lock lock_AB;

task A

{ data A_number;

get_lock(lock_AB);

A_number = read(number_1);

A_number = A_number + 1;

write(number_2,A_number);

release_lock(lock_AB);

}

task B

{ get_lock(lock_AB);

i = ( read(number_1) == read(number_2) );

release_lock(lock_AB);

if ( i ) do_something();

else do_something_else();

}


The get_lock() and release_lock() function calls do not belong to any specific programming language, library or standard. They have just been invented for the purpose of illustration of the idea.

When either task A or task B reaches its so-called critical section, it requests the lock; it gets the lock if the lock is not taken by the other task, and can enter the critical section; otherwise, it waits (“blocks”, “sleeps”) till the other task releases the lock at the end of its critical section. A blocked task cannot be scheduled for execution, so locks are to be used with care in real-time applications: the application programmer should be sure about the maximum amount of time that a task can be delayed because of locks held by other tasks; and this maximum should be less than specified by the timing constraints of the system.

The get_lock() should be executed atomically, in order to avoid a race condition when both tasks try to get the lock at the same time. (Indeed, the lock is in this case an example of a shared resource, so locking is prone to all race conditions involved in allocation of shared resources.) The atomicity of getting a lock seems to be a vicious circle: one needs a lock to guarantee atomicity of the execution of the function that must give you a lock. Of course, (only) the use of an atomic machine instruction can break this circle. Operating systems implement the get_lock() function by means of an atomic test_and_set() machine instruction (see Section 4.5) on a variable associated with the lock.

Another effective (but not necessarily efficient!) implementation of a lock is as follows (borrowed from

the Linux kernel source code):

int flags;
save_flags(flags);     // save the state of the interrupt vector
cli();                 // disable interrupts
// ... critical section ...
restore_flags(flags);  // restore the interrupt vector to
                       // its original state
sti();                 // enable interrupts

(Note that, in various implementations, restore_flags() implicitly uses sti().)

The implementation described above is not always efficient because: (i) in SMP systems the cli() turns off interrupts on all CPUs (see Section 3.1), and (ii) if a test_and_set() can do the job, one should use it, because the disabling of the interrupts and the saving of the flags generate a lot of overhead.

The lock concept can easily lead to unpredictable latencies in the scheduling of a task: the task can sleep

while waiting for a lock to be released; it doesn’t have influence on how many locks other tasks are

using, how deep the locks are nested, or how well-behaved other tasks use locks. Both tasks involved in a

synchronization using a lock have (i) to agree about which lock they use to protect their common data (it

must be in their common address space!), (ii) to be disciplined enough to release the lock, and (iii) to

keep the critical section as short as possible. Hence, the locks-based solution to access or allocation

constraints is equally indirect and primitive as the priority-based solution to timing constraints: it doesn’t

protect the data directly, but synchronizes the code that accesses the data. As with scheduling priorities,

locks give disciplined(!) programmers a means to reach deterministic performance measures. But even

discipline is not sufficient to guarantee consistency in large-scale systems, where many developers work

more or less independently on different parts.

Locks are inevitable for task synchronization, but for some common data exchange problems there exist

lock-free solutions (see Section 4.10). The problem with using locks is that they make an application

vulnerable for the priority inversion problem (see Section 4.8). Another problem occurs when the CPU

on which the task holding the lock is running, suddenly fails, or when that task enters a trap and/or

exception (see Section 3.1), because then the lock is not released, or, at best its release is delayed. 44


4.6.1. Semaphore

The name “semaphore” has its origin in the railroad world, where it was the (hardware) signal used to

(dis)allow trains to access sections of the track: when the semaphore was lowered, a train could proceed

and enter the track; when entering, the semaphore was raised, preventing other trains from entering;

when the train in the critical section left that section, the semaphore was lowered again.

Edsger Dijkstra introduced the semaphore concept in the context of computing in 1965, [Dijkstra65]. A semaphore is an integer number (initialized to a positive value), together with a set of function calls to count up() and down(). The POSIX names for down() and up() are sem_wait() and sem_post(). POSIX also introduces the non-blocking function sem_trywait(): the same as sem_wait(), but instead of blocking, the state of the semaphore is given in the function’s return value.

A task that executes a sem_wait() decrements the count, and blocks if the count becomes negative. A task that executes a sem_signal() increments the count; if this makes the semaphore value non-negative again, the semaphore unblocks one of the tasks that were blocked on it.

So, the number of tasks that a semaphore allows to pass without blocking is equal to the positive number

with which it is initialized; the number of blocked tasks is indicated by the absolute value of a negative

value of the semaphore count.

The semaphore S must also be created (sem_init(S,initial_count)) and deleted (sem_destroy(S)) somewhere. The initial_count is the number of allowed holders of the semaphore lock. Usually, that number is equal to 1, and the semaphore is called a binary semaphore. The general case is called a counting semaphore. Most operating systems offer both, because their implementations differ only in the initialization of the semaphore’s count.

From an implementation point of view, the minimum data structure of a semaphore has two fields:

struct semaphore {

int count; // keeps the counter of the semaphore.

queue Q; // lists the tasks that are blocked on the semaphore.

}

And (non-atomic!) pseudo code for sem_wait() and sem_signal() (for a binary semaphore) basically looks like this (see, for example, upscheduler/rtai_sched.c of the RTAI code tree for more detailed code):

semaphore S;

sem_wait(S)

{ if (S.count > 0) then S.count = S.count - 1;

else block the task in S.Q;

}

sem_signal(S)

{ if (S.Q is non-empty) then wakeup a task in S.Q;

else S.count = S.count + 1;

}

So, in an implementation that lets the count go negative, a negative S.count indicates that at least one task is blocked on the semaphore; the absolute value of S.count gives the number of blocked tasks.

The semantics of the semaphore as a lock around a critical section is exactly as in its historical railway inspiration. However, a semaphore can also be used for different synchronization goals: if task A just wants to synchronize with task B (irrespective of whether or not it needs to exclude task B from entering a shared piece of code), both tasks can use the sem_wait() and sem_signal() function calls.

Here is a pseudo code example of two tasks, task A and task B, that synchronize their mutual job by means of a semaphore:

semaphore S;

task A:
main()
{ ...
  do_first_part_of_job();
  sem_signal(S);
  do_something_else_A();
  ...
}

task B:
main()
{ ...
  do_something_else_B();
  sem_wait(S);
  do_second_part_of_job();
  ...
}

Finally, note that a semaphore is a lock for which the normal behaviour of the locking task is to go to

sleep. Hence, this involves the overhead of context switching, so don’t use semaphores for critical

sections that should take only a very short time; in these cases spinlocks are a more appropriate choice

(Section 4.6.3).

4.6.2. Mutex

A mutex (MUTual EXclusion) is often defined as a synonym for a binary semaphore. However, binary

semaphore and mutex have an important semantic distinction: a semaphore can be “signaled” and

“waited for” by any task, while only the task that has taken a mutex is allowed to release it. So, a mutex

has an owner, as soon as it has been taken. This semantics of a mutex corresponds nicely to its envisaged

use as a lock that gives only one task access to a critical section, excluding all others. That is, the task

entering the critical section takes the mutex, and releases it when it exits the critical section. When

another task tries to take the mutex when the first one still holds it, that other task will block. The operating system unblocks one waiting task as soon as the first task releases the mutex. This mutually

exclusive access to a section of the code is often also called serialization.

A POSIX mutex, for example, is a binary lock with ownership, for which priority inheritance can be enabled (see Section 4.9). The basic POSIX API for mutexes is:

pthread_mutex_t lock;

// Initialise mutex object:
int pthread_mutex_init(
  pthread_mutex_t *mutex,
  const pthread_mutexattr_t *mutex_attr
);

// Destroy mutex object:
int pthread_mutex_destroy(pthread_mutex_t *mutex);

// Non-blocking mutex lock:
int pthread_mutex_trylock(pthread_mutex_t *mutex);

// Blocking mutex lock:
int pthread_mutex_lock(pthread_mutex_t *mutex);

// Mutex unlock:
int pthread_mutex_unlock(pthread_mutex_t *mutex);
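A minimal usage sketch of this API; the shared counter and the function name worker() are invented:

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; /* static initialization */
int shared_counter = 0;   /* the shared resource to be protected */

void *worker(void *arg) {
  pthread_mutex_lock(&lock);    /* blocks until the mutex is free */
  shared_counter++;             /* critical section */
  pthread_mutex_unlock(&lock);  /* only the owner may do this */
  return NULL;
}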

A recursive mutex (or recursive semaphore) is a mutex that can be locked repeatedly by its owner. Otherwise, a thread that holds a mutex and tries to take the mutex again would lock itself, hence leading to a deadlock. This recursive property is useful for complex mutual exclusion situations, such as in monitors (Section 15.2).

The POSIX API requires one to indicate explicitly, through a mutex attribute object, that a mutex should be recursive:

pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);

Some operating systems (e.g., VxWorks) use the recursive mutex mode as the default. Some offer a so-called fast mutex: such a mutex is locked and unlocked in the fastest manner possible on the given operating system (i.e., it doesn’t perform any error checks). A fast mutex can only be locked one single time by pthread_mutex_lock(), and all subsequent calls cause the calling thread to block until the mutex is freed; even the thread that already holds the mutex blocks on a second lock attempt, which causes a deadlock. So, be careful with using fast mutexes.

Many programmers tend to think that a semaphore is necessarily a more primitive RTOS function than a

mutex. This is not necessarily so, because one can implement a counting semaphore with a mutex and a

condition variable (Section 4.7):

int sem_wait(sem_t *sem)

{ pthread_mutex_lock(&sem->mutex);

while (sem->count == 0) pthread_cond_wait(&sem->cond, &sem->mutex);

sem->count--;

pthread_mutex_unlock(&sem->mutex);

return(0);

}

4.6.3. Spinlocks

A “spinlock” is the appropriate lock mechanism for multi-processor systems, and for use in all kinds of contexts (kernel call, interrupt service routine, etc.). Spinlocks are phasing out the use of “hard” exclusion methods such as cli() and sti(), because: (i) these are too “global”, in the sense that they don’t specify the context in which the lock is needed; and (ii) it is usually not necessary to disable interrupts in order to protect two tasks from entering a critical section. However, you cannot do all kinds of things when running inside a critical section locked by a spinlock! For example, do nothing that can take a “long” time, or that can sleep. Use semaphores or mutexes for this kind of lock.

The task that wants to get a spinlock tries to get a lock that is shared by all processors. If it doesn’t get

the lock, it keeps trying (“busy waiting”) till it succeeds:

int spinlock(spinlock_t *l){
  while (test_and_set(l)) {};   // busy wait until the lock is free
};

So, it’s clear why you shouldn’t do things that take a long time within a spinlock context: another task could be busy waiting for you all the time! An example of a spinlock in the Linux kernel is the “Big Kernel Lock” (Section 1.2): the BKL is a recursive spinlock, i.e., it can be locked multiple times recursively. That means that a task can lock it twice in a row, but it also has to release it twice after that.

Spinlocks come in three versions:

1. spin_lock and spin_unlock: the classical mutual exclusion version, allowing interrupts to occur while in the critical section.

2. spin_lock_irq and spin_unlock_irq: as above, but with interrupts disabled.

3. spin_lock_irqsave and spin_unlock_irqrestore: as above, but saving the current state flag of the processor.

All of them work on (the address of) variables of the type spinlock_t. One should call spin_lock_init() before using the lock. The spinlock versions that disable interrupts do not disable interrupts on other CPUs than the one the calling task is running on, in order not to bring down the throughput of the whole multi-processor system. An example (Linux specific!) of the usage (not the implementation!) of a spinlock with local interrupt disabling is given here:

spinlock_t l = SPIN_LOCK_UNLOCKED;
unsigned long flags;

spin_lock_irqsave(&l, flags);

/* critical section ... */

spin_unlock_irqrestore(&l, flags);

So, both the concurrency and the multi-processor issues are dealt with. On a uni-processor system, this

should translate into:

unsigned long flags;

save_flags(flags);

cli();

/* critical section ... */

restore_flags(flags);

Note: the POSIX pthread_spin_lock() function has this semantics of disabling interrupts.

Spinlocks are a trade-off between (i) disabling all interrupts on all processors (costly, safe, but what you

don’t want to do on a multi-processor system or a pre-emptable kernel), and (ii) wasting time in busy

waiting (which is the only alternative that remains). So, spinlocks work if the programmer is disciplined

enough to use them with care, that is for guaranteed very short critical sections. In principle, the latency

induced by a spinlock is not deterministic, which is in contradiction to its use for real-time. But they

offer a good solution in the case that the scheduling and context switching times generated by the use of

locks are larger than the time required to execute the critical section the spinlock is guarding.

There is a reason why atomic test-and-set operations are not optimal on multi-processor systems built

from typical PC architecture processors: the test-and-set performed by one processor can make parts of

the caches on the other processors invalid because part of the operation involves writing to memory. And

this cache invalidating lowers the benefits to be expected from caching. But the following

implementation can help a bit:

int spinlock(spinlock_t *l){
  while (test_and_set(l)) {   // enter wait state if *l is 1
    while (*l == 1) {}        // stay in wait state until *l becomes 0
  };
};

The difference with the previous implementation is that the test_and_set() requires a read and a write operation (which has to block memory access for other CPUs), while the l == 1 test requires only a read, which can be done from cache.

4.6.4. Read/write locks

Often, data has only to be protected against concurrent writing, not concurrent reading. So, many tasks

can get a read lock at the same time for the same critical section, but only one single task can get a write

lock. Before this task gets the write lock, all read locks have to be released. Read locks are often useful to

access complex data structures like linked lists: most tasks only read through the lists to find the element they are interested in; changes to the list are much less common. (See also Section 5.5.)


Linux has a reader/writer spinlock (see below), that is used similarly to the standard spinlock, with the

exception of separate reader/writer locking:

rwlock_t rwlock = RW_LOCK_UNLOCKED; // initialize

read_lock(&rwlock);

/* critical section (read only) ... */

read_unlock(&rwlock);

write_lock(&rwlock);

/* critical section (read and write) ... */

write_unlock(&rwlock);

Similarly, Linux has a read/write semaphore.
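POSIX offers an equivalent primitive for user-space threads, the pthread_rwlock_t; a minimal sketch (the reader/writer function names are invented):

#include <pthread.h>

pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;

void reader(void) {
  pthread_rwlock_rdlock(&rwlock);  /* many readers may hold this at once */
  /* ... walk the linked list ... */
  pthread_rwlock_unlock(&rwlock);
}

void writer(void) {
  pthread_rwlock_wrlock(&rwlock);  /* exclusive: waits until all readers left */
  /* ... modify the list ... */
  pthread_rwlock_unlock(&rwlock);
}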

4.6.5. Barrier

Sometimes it is necessary to synchronize a lot of threads, i.e., they should wait until all of them have

reached a certain “barrier.” A typical implementation initializes the barrier with a counter equal to the

number of threads, and decrements the counter whenever one of the threads reaches the barrier (and

blocks). Each decrement requires synchronization, so the barrier cost scales linearly in the number of

threads.

POSIX (1003.1-2001) has a pthread_barrier_wait() function, and a pthread_barrier_t type. RTAI has something similar to a barrier but somewhat more flexible, which it calls “bits” (see the file bits/rtai_bits.c in the RTAI source tree), and which some other operating systems call flags or events. The bits is a 32-bit value that tasks can share to encode any kind of AND or OR combination of binary flags. It can be used as a barrier for a set of tasks, by initializing the bits corresponding to each of the tasks to “1” and letting each task that reaches the barrier reset its bit to “0”. This is similar to a semaphore (or rather, an array of semaphores), but it is not “counting”.
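A minimal sketch of the POSIX barrier API; the thread count and the function name worker() are invented:

#include <pthread.h>

#define NUM_WORKERS 4            /* number of threads to synchronize */
pthread_barrier_t barrier;

void *worker(void *arg) {
  /* ... phase 1 of the computation ... */
  pthread_barrier_wait(&barrier); /* block until all NUM_WORKERS arrive */
  /* ... phase 2: may rely on every task having finished phase 1 ... */
  return NULL;
}

int main(void) {
  pthread_barrier_init(&barrier, NULL, NUM_WORKERS);
  /* ... create the NUM_WORKERS threads, join them ... */
  pthread_barrier_destroy(&barrier);
  return 0;
}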

4.7. Condition variable for synchronization within mutex

Condition variables have been introduced for two reasons (which amount basically to one single reason):

1. They allow a task to sleep until a certain application-defined logical criterion is satisfied.

2. They allow a task to sleep within a critical section. (Unlike a semaphore.) This is in fact the same reason as above, because the critical section is needed to evaluate the application-defined logical criterion atomically.

The solution to this problem is well known, and consists of the combination of three things:

1. A mutex lock (see Section 4.6.2).

2. A boolean expression, which represents the above-mentioned logical criterion.


3. A signal (see Section 4.3), that other tasks can fire to wake up the task blocked in the condition

variable, so that it can re-check its boolean expression.

The lock allows the task to check the boolean expression “atomically” in a critical section, and to wait for the

signal within that critical section. It’s the operating system’s responsibility to release the mutex behind

the back of the task, when it goes to sleep in the wait, and to take it again when the task is woken up by

the signal.

There exists a POSIX standard for condition variables. Here are some of the major prototypes, among them the pthread_cond_wait() system call, used to make a task wait for its wake-up signal:

#include <pthread.h>

// Initialise condition attribute data structure:
int pthread_condattr_init(pthread_condattr_t *attr);

// Destroy condition attribute data structure:
int pthread_condattr_destroy(pthread_condattr_t *attr);

// Initialise conditional variable:
int pthread_cond_init(
  pthread_cond_t *cond,
  const pthread_condattr_t *cond_attr
);

// Destroy conditional variable:
int pthread_cond_destroy(pthread_cond_t *cond);

// Wait for condition variable to be signaled:
int pthread_cond_wait(
  pthread_cond_t *cond,
  pthread_mutex_t *mutex
);

// Wait for condition variable to be signaled or timed-out:
int pthread_cond_timedwait(
  pthread_cond_t *cond,
  pthread_mutex_t *mutex,
  const struct timespec *abstime
);

// Restart one specific waiting thread:
int pthread_cond_signal(pthread_cond_t *cond);

// Restart all waiting threads:
int pthread_cond_broadcast(pthread_cond_t *cond);

Other system calls that take the same arguments are: pthread_cond_init() (initialize the data structure with which a condition variable is built), pthread_cond_signal() (signal the fact that a condition variable has changed state), pthread_cond_broadcast() (signals the state change to all tasks that are waiting for the signal, and wakes them all), and pthread_cond_timedwait() (wait for the signal, or for a timer to expire, whichever comes first).


The sem_wait() of Section 4.6.2 shows a typical application of a condition variable. We repeat the code here for convenience:

int sem_wait(sem_t *sem)

{ pthread_mutex_lock(&sem->mutex);

while (sem->count == 0) pthread_cond_wait(&sem->cond, &sem->mutex);

sem->count--;

pthread_mutex_unlock(&sem->mutex);

return(0);

}

The semaphore has a mutex sem->mutex, a condition signal sem->cond, and its particular boolean expression, namely its count being zero or not. The checking of this condition, as well as the possible decrement of the count, must be done in a critical section, in order to synchronize access to the semaphore with other tasks. The pthread_cond_wait() function makes the calling task block on the condition variable if the boolean expression evaluates to false. The operating system releases the mutex when the task must block, so that other tasks can use the semaphore. When the condition is signaled (this is done by the complementary function sem_signal(), which is not given here but that executes a pthread_cond_broadcast()), the calling task is woken up and its mutex is activated (all in one atomic operation) such that the woken-up task can safely access the critical section, i.e., check its boolean expression again. The above-mentioned atomicity is guaranteed by the operating system, which itself uses some more internal locks in its implementation of the pthread_cond_wait() call.
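For completeness, here is a sketch of what the complementary sem_signal() could look like, under the same assumed sem_t layout (a mutex, a condition variable and a count) as the sem_wait() above:

int sem_signal(sem_t *sem)
{ pthread_mutex_lock(&sem->mutex);
  sem->count++;                        // the boolean expression changes
  pthread_cond_broadcast(&sem->cond);  // wake all waiters; they re-check
  pthread_mutex_unlock(&sem->mutex);
  return(0);
}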

It is essential that tasks that wake up from waiting on a condition variable re-check the boolean expression for which they were waiting, because nothing guarantees that it is still true at the time of waking up. Indeed, a task can be scheduled a long time after it was signaled. So, it should also be prepared to wait again. This leads to the almost inevitable while loop around a pthread_cond_wait().

The pthread_cond_broadcast() should be the default way to signal the condition variable, and not pthread_cond_signal(). The latter is only an optimization in the case that one knows for sure that only one waiter must be woken up. However, this optimization violates the loose coupling principle of good software design (Chapter 14): if the application is changed somewhat, the “optimization” of before could well become a bottleneck, and solving the situation involves looking for the pthread_cond_signal() calls that can be spread over various files in the application.

However, blindly using pthread_cond_broadcast() can also have a negative effect, called the “thundering herd” problem: pthread_cond_broadcast() can wake up a large number of tasks, and in the case that only one task is needed to process the broadcast, all other woken-up tasks will immediately go to sleep again. That means the scheduler is hidden under a “herd” of unnecessary wake-up and sleep calls. So, Linux and other operating systems introduced policies that programmers can use to give some tasks the priority in wake-ups.

Both semaphores/mutexes and condition variables can be used for synchronization between tasks.

However, they have some basic differences:


1. Signaling a semaphore always has an effect on the semaphore’s internal count. Signaling a condition variable can sometimes have no effect at all, i.e., when no task is waiting for it.

2. A condition variable can be used to check an arbitrarily complex boolean expression.

3. According to the POSIX rationale, a condition variable can be used to make a task wait indefinitely long, but spinlocks, semaphores and mutexes are meant for shorter waiting periods. The reason is that pthread_mutex_lock() is not a cancellation point, while pthread_cond_wait() is.

4. A condition variable is nothing more than a notification to a task that the condition it was waiting for might have changed. And the woken-up task should check that condition again before proceeding. This check-on-wake-up policy is not part of the semaphore primitive.

4.8. Priority inversion

Priority scheduling and locks are, in fact, contradictory OS primitives: priority scheduling wants to run the highest priority job first, while a mutex excludes every other job (so, also the highest priority job) from running in a critical section that is already entered by another job. And these contradictory goals lead to tricky trade-offs. For example, everybody coding multi-tasking systems using priority-based task scheduling and locking primitives should know about the “priority inversion” danger: in some situations, the use of a lock prevents a task from proceeding because it has to wait for a lower-priority task. The reason is that a low-priority task (i) is in a critical section for which it holds the lock that blocks the high-priority task, and (ii) is itself pre-empted by a medium-priority task that has nothing to do with the critical section in which the high- and low-priority tasks are involved. Hence the name “priority inversion”: a medium-priority job runs while a high-priority task is ready to proceed. The simplest case is depicted in Figure 4-1. In that Figure, task H is the high-priority task, task M the medium-priority task, and task L the low-priority task. At time instant T1, task L enters the critical section it shares with task H. At time T2, task H blocks on the lock held by task L. (Recall that it cannot pre-empt task L because that task has the lock on their common critical section.) At time T3, task M pre-empts the lower-priority task L, and at the same time also the higher-priority task H. At time T4, task M stops, and task L gets the chance again to finish the critical section code at time T5 when, at last, task H can run.

Figure 4-1. Priority inversion.

The best-known practical case of a priority inversion problem occurred during the Mars

Pathfinder mission in 1997. (More information about this story can be found at

http://www.kohala.com/start/papers.others/pathfinder.html or

http://research.microsoft.com/~mbj/Mars_Pathfinder/Mars_Pathfinder.html.)

4.9. Priority inheritance and priority ceiling

Operating system programmers have tried to “solve” (not prevent) the priority inversion problem, in two


different ways:

• Priority inheritance. A low-priority task that holds the lock requested by a high-priority task temporarily “inherits” the priority of that high-priority task, from the moment the high-priority task does the request. That way, the low-priority task will not be pre-empted by medium-level priority tasks, and will be able to finish its critical section without holding up the high-priority task any longer than needed. When it releases the lock, its priority drops to its original level, while the high-priority task will now get the lock. The maximum predictable delay is the length of the critical section of the low-priority task.

Priority inheritance generates run-time overhead, because the scheduler has to inspect the priorities of all tasks that access a lock.

• Priority ceiling. Every lock gets a priority level corresponding to the priority of the highest-priority task that can use the lock. This level is called the ceiling priority. Note that it is the lock that gets a priority, which it gives to every task that tries the lock. So, when the low-priority task enters the critical section, it immediately gets the ceiling priority from the lock, such that it will not be pre-empted by any medium-level priority task. Therefore, another name for the priority ceiling protocol is instant inheritance.

Priority ceiling generates compile-time overhead, because it can already at that moment check the priorities of all tasks that will request a lock.

Priority ceiling has the pleasant property that it simplifies implementation and has small run-time overhead (only the change in priority for the task entering a critical section): the lock never has to be tested for being free or not, because any task that tries the lock runs at the highest priority to enter the critical section: any other task that could test the lock would run at the same ceiling priority, and hence would not have been interrupted in its critical section by the task that currently tests the lock. Indeed, both tasks live at the same priority level and are scheduled with a SCHED_FIFO policy. Instant inheritance also offers a solution to the “deadly embrace” (see Section 4.2) occurring when two tasks lock nested critical sections in opposite order: the first task to enter the outermost lock will have the appropriate priority to finish the complete set of nested critical sections.

A possible problem with priority ceiling is that it makes more processes run at higher priorities, for longer times than necessary. Indeed, the priorities of tasks are changed irrespective of whether another task will try to request the lock or not. This reduces the discriminating effects of using priorities in the first place, and it gives rise to “hidden” priority inversion: while task L gets its priority raised to the ceiling priority because it is involved in a lock with another task V that has a very high priority, a third task H not involved in the lock could get pre-empted by L although its priority is higher and V is dormant most of the time.

Priority ceiling and inheritance look great at first sight, and they are part of some OS standards: priority ceiling is in the POSIX standard (POSIX_PRIO_PROTECT), the Real-Time Specification for Java (RTSJ), OSEK, and the Ada 95 real-time specifications. Priority inheritance is also part of standards such as POSIX (POSIX_PRIO_INHERIT), and the RTSJ. But priority ceiling and inheritance can still not guarantee that no inversion or indeterministic delays will occur, [Yodaiken2002], [Locke2002]. Moreover, the priority inheritance “feature” gives rise to code that is more complex to understand and certainly to predict. Also, determining a priori the ceiling priority for a lock is not an easy matter (the compiler must have access to all code that can possibly use a lock!), and can cause portability and extendability headaches.
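In the POSIX API, both protocols are selected through a mutex attribute object. A minimal sketch (the function name init_pi_mutex() is invented for illustration):

#include <pthread.h>

pthread_mutex_t lock;
pthread_mutexattr_t attr;

void init_pi_mutex(void) {
  pthread_mutexattr_init(&attr);
  /* Priority inheritance (POSIX_PRIO_INHERIT): */
  pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
  /* For priority ceiling (POSIX_PRIO_PROTECT), one would instead select
     PTHREAD_PRIO_PROTECT and set the ceiling with
     pthread_mutexattr_setprioceiling(). */
  pthread_mutex_init(&lock, &attr);
}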

Priority inversion is always a result of a bad design, so it’s much better to prevent race conditions instead

of “solving” them. However, contrary to the deadlock prevention algorithm (Section 4.2), no similarly

simple and guaranteed algorithm for priority inversion is known. So, all an operating system could do to

help the programmers is signalling when priority inversion takes place, such that they can improve their

design.

Most RTOSes don’t apply priority inversion solutions for every case of sender-receiver synchronization.

For example Neutrino (from QNX) uses separate synchronization mechanisms for critical sections

(semaphores) and sender-receiver (which is synchronous IPC in QNX). It solves priority inversion only

so long as applications use a many-to-one IPC model. As soon as an application uses many-to-many

IPC (via a POSIX queue) there is no more prevention of priority inversion. Many-to-many is inherently

difficult because the kernel has no way to know which receiver might be ready next, so all it could do

would be to raise the priority of all potential listeners (and the processes upon which they are waiting).

And this would often result in a logjam as every process was raised to the same priority, invalidating

exactly the major reason why priorities were introduced in the first place.

4.10. Lock-free synchronization for data exchange

Some synchronization can also be done without locks; this is much more efficient and guaranteed to be deadlock-free, [Herlihy91], [Herlihy93]. Lock-free synchronization uses the compare_and_swap(address,old,new) (see Section 4.5), or similar constructs. This functionality is applicable to the manipulation of pointers, e.g., to interchange two buffers in one atomic operation, or to do linked list, queue or stack operations.

The following code fragment shows the basic form of this pointer swinging:

ptr = ...
do {
  old = ptr;
  new = new_value_for_pointer;
} while ( !compare_and_swap(ptr,old,new) );

If the compare_and_swap() returns “false”, the swinging of the pointers is not done, because some other task has done something with the pointer in the meantime.
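In C11, the same pointer swinging can be written with the standard atomics; a minimal sketch, in which the type config and the function publish() are invented:

#include <stdatomic.h>

struct config;                      /* the shared data structure (invented) */
_Atomic(struct config *) current;   /* the pointer that readers follow */

void publish(struct config *fresh) {
  struct config *old;
  do {
    old = atomic_load(&current);
    /* ... a new *fresh version would typically be derived from *old here ... */
  } while (!atomic_compare_exchange_weak(&current, &old, fresh));
}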

Recall the possible problem with compare_and_swap(): it only compares the values at the addresses, not whether this value has been changed! This means that a double change of the pointer (back to its original value) will not be detected. This occurs quite frequently, i.e., any time when memory space is re-used, e.g., in a stack or a linked list.


Another problem of using compare_and_swap for lock-free synchronization is that it is not always the most efficient method available, because it involves the copying of a complete data structure before that data structure can be updated without a lock.

Chapter 5. IPC: Data exchange

The previous Chapter looked at the synchronization aspect of IPC; this Chapter deals with the

mechanisms and policies of data exchange. The emphasis is on data exchange for real-time systems.

The mechanism of all data exchange IPC is quite similar: the operating system has some memory space

reserved for the data that has to be exchanged, and uses some synchronization IPC primitives for reading

or writing to that memory space. There is some “object” responsible for the memory and the locks; we

call this object the mediator (Section 14.3), or the channel. The mediator is really the heart of the data

exchange: the IPC clients make function calls on it, but it’s the mediator that takes care of memory

allocation, buffering, locking, signalling, etc. Although a mediator is more of an object-oriented design

concept, it is there already in most of the old C code of operating systems. The bad news is that most

operating system designers didn’t realize that, and hence, they didn’t reuse the mediator code when

implementing the myriads of IPC forms they developed. . .

It’s especially in their policy (i.e., the choice of how the low-level mechanism is being used) that the

different forms of data exchange differ from each other. Below is a non-exhaustive list of policies for

data exchange. Almost every possible combination of options is feasible, so it should come as no surprise

that operating systems tend to have data exchange primitives with not quite the same API. . .

• (No) Data loss. Whether or not everything that the “sender” sends to the “receiver” (or rather, to the IPC mediator object) will indeed be received by the “receiver”.

• (Non)Blocking. (Also called (a)synchronous.) The “sender” and/or “receiver” block until the exchange is finished.

• One-to-many/many-to-one/many-to-many/one-to-one. There is one single “sender” and multiple “receivers”. Or any variation on this theme.

• Named/anonymous. The “sender” must explicitly give the identification of the “receivers”, or it must only name the mediator.

• (Non)Buffered. The “sender” sends some data to the “receiver”, but only indirectly: the data is stored in a buffer, from which the “receiver” reads at its own leisure.

• (Non)Prioritized. A message can get a priority, and the highest priority message gets delivered first. Similarly, the tasks that wait for a data exchange (hence, which are blocked by the lock of the mediator) can be woken up according to their static priority.

There does exist some standardization in data exchange IPC: POSIX has its standard 1003.1b, in which it

specifies message queues, with 32 priorities and priority-based task queues on their locks.

5.1. Shared memory

Two (or more) tasks can exchange information by reading and writing the same area in memory. The


main advantage is that the data exchange can take place with zero copying, because the “buffer” is just

one deep. For the rest, any policy can be implemented on top of shared memory. One area where shared

memory is very popular is for the data exchange with peripheral hardware, if possible under Direct

Memory Access (DMA).

Available RAM memory is the only limit to the number of independent shared memory IPC “channels”.

The shared memory must be reserved from the operating system, and locked into RAM. (See Section

6.1.) If the tasks involved in the shared memory IPC want to know how “fresh” the data in the shared

segment is, they have to implement their own handshake protocol themselves, because the operating

system gives no indication as to whether data from shared memory has already been accessed or not. One

common approach is to put a counter in the shared memory data structure, that indicates how many writes

have already taken place. This counter could be a time stamp, which is, in addition, particularly useful

for an asynchronous monitor task in user space: that task, for example, plots the data from the real-time

task at its own rate, and the time stamps provide a way to keep the data plotted on a correct time line.
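A minimal sketch of such a counter handshake (all names and the field layout are invented; note that this simple version does not protect against reading a half-written measurement, for which a full sequence lock, bumping the counter before and after the write, would be needed):

/* One writer increments a sequence number with each update, so a
   reader can detect fresh (or missed) samples. */
struct shm_data {
  volatile unsigned long seq;  /* incremented by the writer at each write */
  double measurement;          /* the actual data being exchanged */
  /* a time stamp field could replace or accompany 'seq' */
};

void writer_update(struct shm_data *shm, double value) {
  shm->measurement = value;
  shm->seq++;                  /* signal "new data" to any reader */
}

int reader_poll(struct shm_data *shm, unsigned long *last_seen, double *out) {
  if (shm->seq == *last_seen) return 0;  /* nothing new */
  *last_seen = shm->seq;
  *out = shm->measurement;
  return 1;                    /* fresh data read */
}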

5.2. FIFOs

Shared memory has the properties of a so-called block device: programs can access arbitrary blocks on

the device, in any sequence. Character devices, on the other hand, can access the data only in a specified

linear sequence. A FIFO (First-in, First-Out) is such a character device IPC: its mediator policy is

loss-free, non-blocking (unless the FIFO is empty or full), in principle many-to-many but in practice

often 1-to-1 (i.e., only one sender and one receiver), and buffered (i.e., FIFOs put data in a pipeline,

where the sender adds data on one end, and the reader reads it at the other end).

Some FIFO implementations support blocking for synchronization at the reader’s end, so the reader gets

woken up as soon as new data has arrived. FIFOs that implement blocking have a “task queue” data

structure in which blocked tasks can wait.

FIFOs often also allow asynchronous data exchange: a task can register a FIFO handler that the

operating system executes after data has been put into the FIFO. This is done with exactly the same

ISR-DSR principle as for hardware and software interrupts (Section 3.4): the writing of data into the

FIFO also fires an event that activates the FIFO handler; this handler will be a tasklet (Section 3.4), that,

just as in the case of an interrupt, is executed by the operating system before it does its next scheduling.

The boundaries between successive data in the FIFO need not necessarily be sharp, because different

blocks might have different sizes. However, in that case, one speaks more often about mailboxes, see

Section 5.3.

The mediator implementing the FIFO uses a lock for mutual exclusion during read or write, and in order

to keep the FIFO’s task queue data structures consistent. However, if the FIFO runs between a real-time

task and a user space task, the locking problem is very much simplified: the real-time task can never be

interrupted by the user task (because it runs in kernel space) so no lock is needed at the real-time side.


5.3. Messages and mailboxes

Messages and mailboxes allow the sender to send data in arbitrary chunks, only limited by available

memory in the buffer. The message contains some “meta-information” about the size and sender of the

message; or whatever data that the message protocol prescribes. This meta-information is, in fact, the

only practical difference with FIFOs. From an implementation point of view, FIFOs, messages and

mailboxes all look very similar, in the sense that there is the mediator object that takes care of the buffer,

the locks and the queues of waiting tasks. Moreover, many operating systems make no distinction

between messages and mailboxes. If they do make a distinction, it is the following:

• Message. The sender puts its message in a memory space it has allocated itself, and then sends the address of that memory to the OS, together with an identification of the receiver. The receiver asks the OS whether there are messages for it, and decides to read them or not. Reading a message is done in the same place as where it was written. If desired, a counter on the message data allows for 1-to-many IPC.

Be careful with this oversimplified description: passing the address of a data chunk is error-prone (multi-processor systems; virtual memory; . . . ).

• Mailbox. The sender notifies the OS that it has a message, and gives the identification of the receiver. The OS then copies the message to the mailbox of the receiver; this mailbox is a buffer managed by the operating system. The receiver task reads the messages in the order of arrival. (Of course, variations on this policy exist.)

A natural extension to messages or mailboxes is synchronous message passing, sometimes also called

Send/Receive/Reply (because that’s what it is called in QNX): the “sender” sends a message, and waits

until the “receiver” has acknowledged the reception of the message. Hence, two messages are exchanged

in this form of IPC. The SIMPL (http://www.holoweb.net/~simpl/) (Synchronous Interprocess Messaging

Project for Linux) project offers a free software implementation of this form of message passing.

POSIX has standardized an API for messages (with the semantics of what was called “mailboxes” above,

i.e., the message queues are managed by the operating system). Here are the basic data structures and

prototypes:

struct mq_attr {
  long mq_maxmsg;  // Maximum number of messages in queue
  long mq_msgsize; // Maximum size of a message (in bytes)
  long mq_flags;   // Blocking/Non-blocking behaviour specifier
                   // (not used in mq_open; only relevant
                   // for mq_getattr and mq_setattr)
  long mq_curmsgs; // Number of messages currently in queue
};

// Create and/or open a message queue:
mqd_t mq_open(
  char *mq_name,
  int oflags,
  mode_t permissions,
  struct mq_attr *mq_attr
);

// Receive a message:
ssize_t mq_receive(
  mqd_t mq,
  char *msg_buffer,
  size_t buflen,
  unsigned int *msgprio
);

// Send a message to a queue:
int mq_send(
  mqd_t mq,
  const char *msg,
  size_t msglen,
  unsigned int msgprio
);

// Close a message queue:
int mq_close(mqd_t mq);

// Set the attributes of a message queue:
int mq_setattr(mqd_t mq, const struct mq_attr *new_attrs,
               struct mq_attr *old_attrs);

// Register a request to be notified whenever a message
// arrives on an empty queue:
int mq_notify(mqd_t mq, const struct sigevent *notification);

// Destroy a message queue:
int mq_unlink(char *mq_name);
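A minimal usage sketch of this API; the queue name "/sensor_q" and the sizes are invented:

#include <mqueue.h>
#include <fcntl.h>
#include <stdio.h>

int main(void) {
  struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
  char buf[64];                /* must be at least mq_msgsize bytes */
  unsigned int prio;

  mqd_t mq = mq_open("/sensor_q", O_CREAT | O_RDWR, 0644, &attr);
  if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

  mq_send(mq, "hello", 6, 5);              /* send with priority 5 */
  mq_receive(mq, buf, sizeof(buf), &prio); /* highest-priority msg first */
  printf("received \"%s\" with priority %u\n", buf, prio);

  mq_close(mq);
  mq_unlink("/sensor_q");
  return 0;
}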

5.4. Circular buffers

A circular (or ring) buffer has most of the properties of shared memory, except that its depth is larger than one (i.e., it can contain more than one of the data structures exchanged in the communication). The buffer is usually implemented as an array of communication data structures, and the positions of sender and receiver are indicated by pointers into this array. When one of these pointers reaches the end of the buffer, it wraps around to the start of the buffer and continues from there. So, data is lost when the sender pointer overtakes the reader pointer; data is read multiple times if the reader pointer overtakes the writer pointer. It’s straightforward to use a lock to avoid these situations. In that case, the lock makes the buffer blocking. A lock can also be set on each data item in the buffer, in order to avoid concurrent access of the same data. A minimal sketch follows below.
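Here is a minimal single-producer/single-consumer sketch (item type and size invented); with exactly one writer and one reader, the two index variables need no common lock:

#define RING_SIZE 16

struct ring {
  int items[RING_SIZE];
  volatile unsigned head;   /* next free slot, written only by the sender */
  volatile unsigned tail;   /* next unread slot, written only by the reader */
};

int ring_put(struct ring *r, int item) {
  unsigned next = (r->head + 1) % RING_SIZE;
  if (next == r->tail) return 0;      /* full: refuse instead of overtaking */
  r->items[r->head] = item;
  r->head = next;
  return 1;
}

int ring_get(struct ring *r, int *item) {
  if (r->tail == r->head) return 0;   /* empty: nothing to read */
  *item = r->items[r->tail];
  r->tail = (r->tail + 1) % RING_SIZE;
  return 1;
}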

Two common options for buffers (especially in real-time applications) are:


• Locking in memory. The memory used for the buffer should not be swapped out of the physical RAM.

• “Buffer Half Full” (High water/Low water) interrupt. The sender and/or receiver tasks can raise an event if the buffer is more than half full or half empty. This event must wake up the other party of the IPC, such that it can take the appropriate actions to prevent the buffer from overflowing or getting empty.

5.5. Swinging buffers

A swinging buffer (or “flip buffer”) is two things:

• An advanced circular buffer. Instead of using one single shared memory array, a swinging buffer uses two or more. The sender fills up one of the buffers, while the receiver empties another one. Every time one of the tasks reaches the end of its buffer, it starts operating on a buffer that the other task is not using.

• A deadlock-free “lock.” Both tasks operate on different data structures, hence no locks are used to access the data. Only when the tasks must decide which buffer to use, they use a lock on the buffer pointers, or the corresponding atomic pointer switching operation (see Section 4.10). In this latter case, the “lock” is atomic in hardware, and hence cannot cause any of the problems generated by software locks.

So, a swinging buffer is non-blocking but loss-prone, because one task can fill or empty the same buffer

of the swinging buffer pair multiple times before the other task is ready to switch buffers.

The swinging buffer approach is also known under the name of read-copy-update (RCU

(http://lse.sourceforge.net/locking/rcu/rcupdate_doc.html) ). It can be used as an alternative for

read-write locks (Section 4.6.4) for “frequent reads/infrequent writes” applications: the readers follow a

pointer, and need no locks, while the (less frequent) writer swaps the pointers after having filled in the

new data structure.
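The writer side of such a pointer switch could look as follows (a sketch only; the GCC atomic exchange built-in stands in for the atomic pointer switching operation of Section 4.10, and the data contents are illustrative):

struct data { double setpoint[6]; };   // illustrative contents

static struct data buffer[2];          // the two buffers of the pair
static struct data *volatile current = &buffer[0]; // readers follow this

// reader: no lock needed, just dereference the current pointer:
struct data *read_side(void)
{
  return (struct data *)current;
}

// infrequent writer: completely fill in the buffer the readers are
// not using, then publish it with one atomic pointer switch:
void write_side(const struct data *fresh)
{
  struct data *spare = (current == &buffer[0]) ? &buffer[1]
                                               : &buffer[0];
  *spare = *fresh;                            // fill in new data first
  __sync_lock_test_and_set(&current, spare);  // then switch the pointer
}

Note that a full RCU implementation must, in addition, make sure that no reader is still using the old buffer before the writer reuses it; that bookkeeping is the hard part of the technique.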

5.6. Remote Procedure Calls

The previous flavours of IPC can all be catalogued as “low-level”: they are implemented with very basic

OS primitives, and are usually shielded from the users within system calls. One of the popular IPC mechanisms at the user level is the Remote Procedure Call (RPC). With RPC, a user can invoke the

execution of a task on a remote computer, as if that task ran on the processor of the calling task. RPC is

implemented on top of messages, with a synchronizing hand-shake protocol. Obviously, RPC is not very

real-time, but could be useful for embedded systems.

On the other hand, RPC is the simplest form of what is also called distributed or embedded components:

software objects (“agents”) that can live on any computer on a network, and that tasks can access

transparently. There are three major standards in the area of distributed components:

• CORBA (http://www.corba.org) (Common Object Request Broker Architecture). This is a fully platform and vendor independent initiative.

• DCOM, controlled by Microsoft.

• RMI (Remote Method Invocation), from the Java world.

A real-time extension to CORBA has been specified in 2001. It takes care of the specification of the

determinism required for real-time applications, but needs to map its functionality onto primitives

offered on the host operating system. Of course, the absolute time scales of CORBA real-time are longer

than those of a stand-alone computer system.

(TODO: more details about CORBA, and real-time CORBA.)

Chapter 6. Memory management

This Chapter explains what memory management means, and how it influences the real-time behaviour

of an operating system. Non real-time aspects of memory management (virtual memory, swapping, dirty

pages management, etc.) are outside the scope of this document.

6.1. Terminology

All tasks need RAM memory to execute. Not only for placing their data, but also for their code and for

IPC with other tasks. A computer system offers a (most often) contiguous space of physical RAM, and

the MMU (Memory Management Unit) of the hardware, and the Virtual Memory software of the

operating system, help to give a task the impression that it is the only one that uses the memory. And that

that memory is (i) larger than the physically available RAM; (ii) distributed (transparently to the task)

over a number of physically non-contiguous memory pages of fixed size; and (iii) protected from access

by other tasks.

But these general-purpose OS requirements are not those of real-time systems, or of embedded systems

on processors without MMU. Their concerns are:

• Fast and deterministic memory management. The fastest and most deterministic approach to memory management is no memory management at all. This means that the programmers have all physical RAM available as one contiguous block that they can use as they like. This approach is usually only an option for small embedded systems that run a fixed and small number of tasks. Other RTOSs and EOSs offer at least the basic memory management: memory allocation and deletion through system calls.

• Page locking. Demand paging is the common approach in general purpose operating systems to distribute the scarce physical RAM over all tasks: each task gets a number of pages in RAM, and the pages it hasn’t accessed recently are “swapped out” to make room for pages of other tasks. This swapping is a non-deterministic thing, because it needs access to disk, and most disk controllers have non-deterministic buffering for optimising the average throughput to or from the disk: when the task needs code or data from one of its pages that is currently swapped out, the page has to be retrieved from disk, and often another page in RAM first has to be swapped out to disk. Hence, the MMU of an RTOS must lock the pages of real-time tasks in the physical RAM, in order to avoid the paging overhead. POSIX provides the mlock() and mlockall() function calls to do this locking (see the sketch after this list).

Page locking is a Quality of Service feature of the operating system: it guarantees that tasks have a

specified amount of the memory resource at their disposal. In this respect, it is similar to the QoS

extensions of the scheduler (see Section 1.3.3).

• Dynamic allocation. A task’s memory needs can change during its lifetime, such that it should be able to ask the operating system for more memory. The Linux system call for this purpose is vmalloc(). (In kernel space!) A real-time memory manager can only make this dynamic allocation of memory deterministic if the memory pages can be taken from a pool of free pages locked in the physical RAM. Anyway, dynamic allocation should be used very carefully in any real-time task, because there is no guarantee that the memory pool has enough free pages left to satisfy all requests. This implies, for example, that IPC approaches with dynamic memory allocation needs (such as unlimited mailboxes and messages, see Section 5.3) are to be avoided.

Nothing prevents an operating system from allocating memory in smaller blocks than one page.

However, finer and variable-sized granularity implies more complex memory management, memory

fragmentation, and hence less determinism.

• Memory mapping. Real-time and embedded systems typically have to access peripheral devices. The on-board registers in which these devices place their data have to be mapped somewhere into the address space of the corresponding device driver task. The POSIX system call to do this mapping is mmap(). Typically, this mapping is a configuration activity, and hence need not be done in real-time.

• Memory sharing. One of the most efficient ways for tasks to communicate is through shared memory (see Section 6.2). The operating system has two major responsibilities in this area: (i) (de)allocation of the shared memory, and (ii) synchronizing access to that memory by different tasks. The latter topic is discussed in Chapter 4; the former is illustrated later in this Chapter.

• RAM disks. In order to avoid the non-deterministic overhead of accessing hard disks (for real-time systems) or the extra cost, extra space, and reduced robustness of mechanical disk devices (for embedded systems), part of the available RAM can be used to emulate a hard disk. This means that that memory is organized and accessed as a file system, as if it would reside on a hard disk.

When the RAM disk should be able to preserve data when the power is switched off, the embedded

system designer implements it in the form of a flash disk. This is memory that can be “burned” many

thousand times, rather quickly, with very little power, and from within the system code itself.

Reburning (“flashing”) is required either for reprogramming the device, or for temporary storage of

“non-volatile” data.

Having the system code in a file system on flash gives the added bonus that the code need not be

loaded into RAM, but may be executed in place. This results in shorter start-up times.

• Stripped libraries. RAM is a scarce resource in real-time and embedded systems, such that programmers try to use as little of it as possible. Hence, they often use “stripped down” versions of general utility libraries (C library, math library, GUI library, etc.). µlibc is such a low-footprint version of the C library.
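As announced in the page locking item above, here is a minimal sketch of how a user space real-time process locks its pages with the POSIX calls (the surrounding program is illustrative):

#include <sys/mman.h>   // mlockall(), MCL_* flags
#include <stdio.h>

int main(void)
{
  // lock all pages currently mapped (MCL_CURRENT) and all pages
  // mapped in the future (MCL_FUTURE) into physical RAM, so that
  // none of them can be swapped out:
  if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
    perror("mlockall");
    return 1;
  }
  // ... real-time work, free of page faults to disk ...
  return 0;
}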

6.2. Shared memory in Linux

(TODO: update state of affairs on shared memory! POSIX API for shared memory; sharing between real-time and user space; shared memory management through locks and/or monitor; copy_to_user(), copy_from_user().)

This Section discusses two complementary ways to allocate shared memory in Linux. There is nothing particularly real-time about using shared memory; allocating shared memory, however, is more controversial: RTLinux doesn’t allow allocating memory on-line, while RTAI does.

The shared-memory pool is a block of physical memory set aside at boot time so that Linux does not use it for processes. To set up the pool, you first determine how much physical memory the system has and how much is to be used for shared memory. Normal Linux processes are required to map physical memory into their private address space to access it. To do this, a Linux process calls open() on the memory device /dev/mem. After the file descriptor is opened, the Linux process maps the shared memory into its address space using mmap(), which returns a pointer to the shared memory as mapped into the Linux process’s address space. Once the shared memory is mapped, it may be accessed by dereferencing the pointer. When the process terminates, you use munmap() to unmap the shared memory, by passing the pointer and the size of its object. Shared-memory access is easier in the kernel space of the real-time Linux variants, since the real-time code executes in kernel space and thus is not required to map physical addresses to virtual addresses.

6.2.1. Allocation at boot time

In this approach, a block of shared memory can be reserved at boot time, to prevent Linux from using it for general purposes. The reservation is done using the append= parameter in LILO (or something similar for other bootloaders). Here is an example of an /etc/lilo.conf file that reserves 1 Megabyte for shared memory out of 16 available Megabytes:

image=/boot/zImage

label=rtlinux

root=/dev/hda1

append="mem=15m"

Linux will use only the first 15 Megabytes, and the last Megabyte can be used for shared memory

purposes. The base address of the shared memory in the above-mentioned example is:

#define BASE_ADDRESS (15 * 0x100000)

The real-time and user Linux tasks use different ways to access the memory.

A real-time task runs in kernel space, and hence can directly access the memory with its physical address. For example, a data structure of type my_data at the start of the shared memory is accessed as:

my_data *ptr;

ptr = (my_data *) BASE_ADDRESS;

ptr->... = ...

A user space task must use its virtual address. This mapping of physical memory into the virtual address space consists of two steps:

• The user space task must “open” the memory, by using the open() system call on the device /dev/mem:

#include <unistd.h> // POSIX defined open()

#include <fcntl.h> // O_RDWR for read and write access

// or O_RDONLY for read-only access, etc.

int fd; // file descriptor for the opened memory

if ((fd = open("/dev/mem", O_RDWR)) < 0) {

// handle possible error here

}

• The mmap() system call then does the actual mapping:

my_data *ptr;

ptr = (my_data *) mmap (0, sizeof(my_data),

PROT_READ | PROT_WRITE,

MAP_SHARED,

fd, BASE_ADDRESS);

The parameters PROT_READ and PROT_WRITE are POSIX-defined and indicate read and write access; MAP_SHARED indicates that memory can be shared with any other task that maps it too. (See the man pages for more details.)

The shared memory is then accessed via the pointer, as in the kernel space task.

my_data *ptr;

ptr->... = ...

The task must use munmap() to un-map the shared memory used for the my_data data structure:

my_data *ptr;

munmap(ptr, sizeof(my_data));

6.2.2. Allocation in kernel space

The mbuff module implements the /dev/mbuff device. This device offers shared memory (allocated in the kernel using vmalloc()) in kernel as well as in user space. The shared memory does not need to be reserved at the system startup, and its size is not limited by memory fragmentation. It is logically (but not physically) contiguous, and is locked in the physical RAM. When you allocate a block, the kernel first grabs the free pages, and then, if there are not enough of them, starts freeing more: by reducing buffers and disk cache, and finally by swapping out some user data and code to disk. This is certainly not a real-time operation: it may take seconds to get something like 100 MB out of a 128 MB RAM machine.

6.2.3. Allocation in module

(TODO: latest kernel options for memory allocation; dmaBuffer module. Use this approach preferably at

boot time, otherwise you might not be able to find all the requested memory as a contiguous area in

RAM.)

(TODO: copy_from_user().)

Chapter 7. Real-time device drivers

An operating system must interface its peripheral devices to its kernel software as well as to the user

application software. This should be done in a modular and systematic way, such that all hardware

“looks the same” to software applications. The software that takes care of this hardware-independent

interfacing are device drivers. For the Linux real-time variants, Comedi (Section 7.4) is a successful and

steadily growing project for real-time and non real-time device drivers for digital acquisition cards.

In the UNIX world, device drivers are visible through the /dev/xyz “files” (where xyz stands for a particular device, such as, for example, hda for the first hard disk, ttyS0 for the first serial line, etc.). The 2.4.X kernels have introduced the devfs and driverfs (“driver file system”) approaches, which give more structure to the information about the devices that have actually been loaded. But all these things are for user space, and hence not relevant for the real-time Linux variants that operate in kernel space.

The bookkeeping aspects of registering a device driver, with major and minor numbers, as well as

guidelines for writing device drivers, are explained in detail in the UNIX literature. For the Linux

example, Rubini’s Linux Device Drivers book ([Rubini2001]) is the major reference.

7.1. Mechanism and policy

A major feature of a good device driver is that it “provides mechanism, not policy.” This means that it

should faithfully mimic all the interfacing capabilities of the device (the “mechanism”), but nothing

more. It should not try to interpret the exchanged data in any possible user context (the “policy”),

because that is the job of that user application program itself. Indeed, once a device driver offers a

software interface to the mechanism of the device, an application writer can use this mechanism interface

to use the device in one particular way. That is, some of the data structures offered by the mechanism are

interpreted in specific physical units, or some of them are taken together because this composition is

relevant for the application. For example, an analog output card can be used to generate voltages that are the inputs for the electronic drivers of the motors of a robot; these voltages can be interpreted as setpoints for the desired velocity of these motors, and six of them are taken together to steer one particular robot with six degrees of freedom. Some of the other outputs of the same physical device can be used by

another application program, for example to generate a sine wave that drives a vibration shaker. Or, the

robot control program can use a force sensor that is interfaced through a serial line. The force sensor

device driver “talks” to both the application program (i.e., the force control algorithm), and the serial line

device driver (for which it is a “user application” itself!). It is obvious that the serial line driver should

never implement function calls that are only useful in the force sensor driver context. Nevertheless, that’s

exactly what happens in many projects with constrained scope, vision and time. . .

As for the other operating system responsibilities discussed in the previous Chapters, writing device

drivers for an RTOS or an EOS is not so much different from writing them for a general-purpose OS.

Basically, in an RTOS context, one should make sure that all timing delays in the drivers are both short

and deterministic, and every DSR should be an appropriately prioritized thread or handler that waits on

an event to become active.

7.2. Device drivers in UNIX

In the UNIX philosophy, all devices are considered as being “files”, and hence, their device drivers share the following functions: open(), close(), read(), write(), read_config(), set_config(). These function call names are operating system independent, and just for demonstration. However, open(), close(), read(), and write() are POSIX compliant. The configuration function calls are, in UNIX, often taken together in the ioctl() function.

open() makes the device accessible for programs, while close() ends the accessibility. The device can be opened in different modes, such as, for example, O_RDONLY (“read-only”), O_WRONLY (“write-only”), O_RDWR (“read and write”), and O_NONBLOCK (“non-blocking”).

read() and write() interchange data between the peripheral device and the (kernel or application) software: a known data structure is copied from one place in memory to another. Of course, the exact contents of that data structure depend on the device and/or on the particular use of this device by the programmer.

read_config() reads out the device’s current configuration status, and set_config() programs that configuration. Configuration management often has less strict timing constraints than reading and writing. It also has less standardized function calls, because of the larger variety in possible settings of different hardware. Nevertheless, the POSIX standard prescribes the use of the ioctl() function call for all configuration actions that don’t fit cleanly in the classic UNIX stream I/O model of open(), close(), read(), and write():

int ioctl(int d, int request, ...)

The d parameter is the “file descriptor” with which the device has been opened; request is the particular configuration identifier; and ... are possible arguments that come with the request.
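For example, configuring a device could look like this (a sketch; the device file and the MYDEV_SET_SPEED request are invented for the illustration, since a real driver defines such request identifiers in its own header):

#include <fcntl.h>      // open()
#include <sys/ioctl.h>  // ioctl()
#include <unistd.h>     // close()

#define MYDEV_SET_SPEED 0x1234   // made-up request identifier

void configure_device(void)
{
  int d = open("/dev/mydev", O_RDWR);  // hypothetical device file
  int speed = 115200;
  // ask the driver to program the new speed:
  ioctl(d, MYDEV_SET_SPEED, &speed);
  close(d);
}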

7.3. Complex device drivers

A simple device driver needs nothing more than writing and/or reading of some hardware registers on a peripheral device. Some devices interact with the software through hardware interrupts. Hence, their device drivers must include an ISR, and possibly also a DSR (see Section 3.4). Recall that only a subset of all kernel space functions is available in the run-time context of an ISR. And a real-time device driver is subjected to even more constraints.

Almost all devices can be interfaced in Programmed Input/Output (PIO) mode: the processor is

responsible for accessing bus addresses allocated to the device, and to read or write data. Some devices

also allow shared memory, or even Direct Memory Access (DMA): the device and the memory exchange

data amongst each other directly, without needing the processor. DMA is a feature of the bus, not of the

operating system; the operating system, however, must support its processes to use the feature, i.e.,

provide a system call to initialize DMA transfer, and a handler to react to the notification of the device

that it has finished its DMA. Anyway, support for shared memory and DMA makes a device driver again

a bit more complex.

From the point of view of system developers, it is worthwhile, in the case of complex devices or systems

with lots of devices, to standardize the structure and the API for the device drivers as much as possible:

• API: devices that offer a similar mechanism should have the same software interface, and their differences should be coped with by parameterizing the interfaces, not by changing the interface for each new device in the family.

• Structure: many electronic interfaces have more than one layer of functionality between the hardware and the operating system, and the device driver code should reflect this fact. For example, many different interface cards use the same PCI driver chips, or use the parallel port to connect to the hardware device. Hence, providing “low-level” device drivers for these PCI chips and parallel ports allows for an increased modularity and re-usability of the software. And the mechanism of the low-level drivers is used with different policies in the various higher-level drivers.

7.4. Comedi

David Schleef started the Comedi (http://stm.lbl.gov/comedi/) project to interface lots of different cards

for measurement and control purposes. These types of cards are often called Data Acquisition cards, or

DAQ cards. Schleef designed a structure which is a balance between modularity (i.e., it’s fairly easy to

integrate a new card because most of the infrastructure part of the driver can be reused) and complexity

(i.e., the structure doesn’t present so much overhead that new contributors are scared away from writing

their new drivers within the Comedi framework).

Comedi works with a standard Linux kernel, but also with its real-time extensions RTAI and RTLinux.

The Comedi project consists of two packages, and three parts: the “comedi” package contains the drivers, and the kcomedilib kernel module for Linux (which is a library to use the drivers in real-time); the “comedilib” package implements the user space access to the device driver functionality.

The cards supported in Comedi have one or more of the following features: analog input channels,

analog output channels, digital input channels, and digital output channels. The digital channels are

conceptually quite simple, and don’t need much configuration: the number of channels, their addresses

on the bus, and their direction (input/output).

The analog channels are a bit more complicated. Typically, an analog channel can be programmed to

generate or read a voltage between a lower and an upper threshold (e.g., -10V and +10V); the card’s

electronics can be programmed to automatically sample a set of channels, in a prescribed order; to buffer

sequences of data on the board; or to use DMA to dump the data in an available part of memory, without

intervention from the processor.

Many interface cards have extra functionality, besides the analog and digital channels. For example, an

EEPROM for configuration and board parameters, calibration inputs, counters and timers, encoders (=

quadrature counter on two channels), etc. Therefore, Comedi offers more than just analog and digital

data acquisition.

The kernel space structures that Comedi uses have the following hierarchy:

• channel: the lowest-level component, that represents the properties of one single data channel (analog in or out; digital in or out). Each channel has parameters for: the voltage range, the reference voltage, the channel polarity (unipolar, bipolar), and a conversion factor between voltages and physical units.

• sub-device: a set of functionally identical channels that are physically implemented on the same (chip on an) interface card. For example, a set of 16 identical analog outputs. Each sub-device has parameters for: the number of channels, and the type of the channels.

• device: a set of sub-devices that are physically implemented on the same interface card; in other words, the interface card itself. For example, the National Instruments 6024E device has a sub-device with 16 analog input channels, another sub-device with two analog output channels, and a third sub-device with eight digital inputs/outputs. Each device has parameters for: the device identification tag from the manufacturer, the identification tag given by the operating system (in order to discriminate between multiple interface cards of the same type), the number of sub-devices, etc.

The basic functionalities offered by Comedi are:

• instruction: to synchronously perform one single data acquisition on a specified channel, or to perform a configuration on the channel. “Synchronous” means that the calling process blocks until the data acquisition has finished.

• scan: repeated instructions on a number of different channels, with a programmed sequence and timing.

• command: start or stop an autonomous (and hence asynchronous) data acquisition (i.e., a number of scans) on a specified set of channels. “Autonomous” means: without interaction from the software, i.e., by means of on-board timers or possibly external triggers.

This command functionality is not offered by all DAQ cards. When using RTAI or RTLinux, the command functionality is emulated through the comedi_rt_timer virtual driver. The command functionality is very configurable, with respect to the choice of events with which to signal the progress of the programmed scans: external triggers, end of instruction, etc.

Comedi not only offers the API to access the functionality of the cards, but also to query the capabilities

of the installed Comedi devices. That is, a user process can find out on-line what channels are available,

and what their physical parameters are (range, direction of input/output, etc.).
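From user space, such a synchronous “instruction” could look as follows with the comedilib API (a sketch; the device file, sub-device, channel and range numbers depend on the installed hardware):

#include <comedilib.h>

int read_one_sample(void)
{
  comedi_t *dev = comedi_open("/dev/comedi0");
  if (dev == NULL) return -1;

  lsampl_t data;
  // synchronously acquire one sample from channel 0 of sub-device 0,
  // using range 0 and the grounded reference; the calling process
  // blocks until the acquisition has finished:
  comedi_data_read(dev, 0, 0, 0, AREF_GROUND, &data);

  comedi_close(dev);
  return (int)data;
}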

Comedi contains more than just procedural function calls: it also offers event-driven functionality. The

data acquisition can signal its completion by means of an interrupt or a callback function call. Callbacks

are also used to signal errors during the data acquisition or when writing to buffers, or at the end of a scan or acquisition that has been launched previously to take place asynchronously (i.e., the card fills up some shared memory buffer autonomously, and only warns the user program after it has finished).

The mechanisms for synchronization and interrupt handling are a bit different when used in a real-time

context (i.e., with either RTAI or RTLinux), but both are encapsulated behind the same Comedi calls.

Because multiple devices can all be active at the same time, Comedi provides (non-SMP!) locking

primitives to ensure atomic operations on critical sections of the code or data structures.

Finally, Comedi offers the above-mentioned “high-level” interaction, i.e., at the level of user space device drivers, through file operations on entries in the /dev directory (for access to the device’s functionality), or interactively from the command line through the “files” in the /proc directory (which allow one to inspect the status of a Comedi device). This high-level interface resides in the “comedilib” tarball, which is the user space library, with facilities to connect to the kernel space drivers residing in the “comedi” tarball.

7.4.1. Writing a Comedi device driver

(TODO: describe series of steps.)

7.5. Real-time serial line

A real-time device driver for the serial lines is integrated into RTAI. There used to be an independent

project, rt_com (http://rt-com.sourceforge.net/), but the developers joined the RTAI bandwagon, and the

code was thoroughly rewritten. (Under supervision of the Comedi maintainer, David Schleef.)

The RTAI device driver resides in the “SPdrv” (Serial Port driver) sub-directory of RTAI. It provides

very configurable address initialization, interrupt handling, buffering, callbacks, and non-intrusive buffer

inspection. It’s a nice, purely “mechanism” device driver.

7.6. Real-time parallel port

A real-time device driver for the parallel port is integrated into Comedi. It’s not much different from a

user space driver, except for the real-time interrupt handler that can be connected to the interrupt that can

be generated on pin 10 of the parallel port. The driver does not support ECP/EPP parallel ports.

7.7. Real-time networking

The rtnet project used to be stand-alone, but is now also integrated into RTAI. It provides a common

programming interface (real-time and user space) between the RTOS and the device drivers of ethernet


cards. Of course, TCP is not supported, due to its inherently non-realtime specifications; UDP is

supported.

Although just about every ethernet card has a Linux driver, these drivers cannot be used unchanged for hard real-time, because their interrupt handling is not real-time safe. Only a couple of the most popular cards are supported, and there is not much interest from the community to port more drivers.

The CAN bus is a two-wire bus with a 1Mbits/sec maximum transmission rate, that has become very

popular in many industries, such as the automotive industry. It can be used for real-time, thanks to its

CSMA/CD-NDBA bus arbitration protocol. CSMA/CD-NDBA stands for Carrier Sense Multiple Access

with Collision Detect—Non-Destructive Bit Arbitration. CSMA is also used for ethernet: all clients of

the bus sense what is happening on the bus, and stop transmitting when they sense a collision of

messages from different clients. The CAN bus adds, in hardware, the NDBA part: this guarantees that the

bit sent on the bus is not destroyed in a collision. In CAN the dominant bit is the logical “0”, and it

overrides the recessive bit (logical “1”). So the client that sends a dominant bit will see this dominant bit

on the bus, and can continue sending. Each client on the CAN bus has a unique and statically defined

identifier of 11 bits wide (29 bits in the extended version of the standard), that corresponds to its priority.

That means that the client with the most dominant bits early on in its identifier will be the one that

survives the NDBA the longest, and hence it is the one that gets the bus first. So, the CAN bus

implements “priority-based scheduling” of its clients. Due to the hardware limitations that must

guarantee the above-mentioned procedure of surviving dominant bits, a CAN bus has a maximum length

of about 100 meters.
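The effect of this arbitration can be illustrated with a small simulation (illustrative only): the bus carries the AND of all transmitted bits, and a client that reads back a dominant “0” where it sent a recessive “1” must back off.

// Simulate one CSMA/CD-NDBA arbitration round between at most 8
// clients that put their 11-bit identifiers on the bus, most
// significant bit first.
int arbitrate(const unsigned int id[], int n)
{
  int active[8];                      // which clients still transmit
  for (int c = 0; c < n; c++) active[c] = 1;

  for (int bit = 10; bit >= 0; bit--) {
    // the bus level is the AND of the bits of all active clients
    // ("0" is dominant):
    unsigned int bus = 1;
    for (int c = 0; c < n; c++)
      if (active[c]) bus &= (id[c] >> bit) & 1;
    // clients that sent a recessive "1" but see a dominant "0"
    // have lost the arbitration and stop transmitting:
    for (int c = 0; c < n; c++)
      if (active[c] && ((id[c] >> bit) & 1) != bus)
        active[c] = 0;
  }
  for (int c = 0; c < n; c++)
    if (active[c]) return c;          // the surviving client gets the bus
  return -1;
}

As described above, the client with the most dominant bits early in its identifier (i.e., the numerically lowest identifier) survives the arbitration and gets the bus first.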

The Real-time Transport Protocol (RTP) (http://www.cs.columbia.edu/~hgs/rtp/) (RFC 1889

(ftp://ftp.isi.edu/in-notes/rfc1889.txt)) and the Real-Time Publish-Subscribe protocol (drafted by

Real-Time Innovations, and to be adopted by the IDA (http://www.ida-group.org)) are “policies” on top

of the normal ethernet protocol. Hence, despite their names, they will at most be soft real time.

Chapter 8. RTAI: the features

This Chapter introduces the RTAI real-time operating system, as an illustration of the concepts and

terminology introduced in the previous Chapters. It describes which features are available in RTAI, and

what the API looks like. This Chapter doesn’t aim to be a reference or user manual of all RTAI commands; you should look for those manuals on the RTAI webpage (http://www.aero.polimi.it/~rtai/documentation/index.html).

8.1. Overview

RTAI consists of five complementary parts:

1. The HAL (Hardware Abstraction Layer) provides an interface to the hardware, on top of which both

Linux and the hard real-time core can run.

2. The Linux compatibility layer provides an interface to the Linux operating system, with which RTAI

tasks can be integrated into the Linux task management, without Linux noticing anything.

3. RTOS core. This part offers the hard real-time functionality for task scheduling, interrupt processing,

and locking. This functionality is not really different from other real-time operating systems.

4. LX/RT (Linux Real-Time). The modularity offered by a Hardware Abstraction Layer separated from

a core built on top of it, is used in other operating systems too, e.g., eCos. The particular thing about

RTAI is the LX/RT component, that makes soft and hard real-time features available to user space

tasks in Linux. LX/RT puts a strong emphasis on offering a symmetric real-time API: the same

real-time functionality should be useable with the same function calls from user space as well as

from kernel space. And also the IPC that LX/RT offers between user space and kernel space

real-time tasks works with a symmetric API.

5. Extended functionality packages. The core is extended with useful extras, such as: several forms of

inter-process communication, network and serial line drivers; POSIX interface; interfaces to

domain-specific third-party toolboxes such as Labview, Comedi (Section 7.4) and Real-Time

Workshop; software watchdogs; etc.

This Chapter explains what features are available in each of these major RTAI parts, as of the 24.1.9

version of RTAI (May 2002). Details about the exact function prototypes can be found in the RTAI

reference manual. The following Chapter discusses their implementation. The discussion is categorized

according to the contents of the previous Chapters of this document. In summary, the feature set of RTAI

is quite complete, offering almost all previously presented concepts. RTAI also implements some

POSIX parts (Section 1.5): it has POSIX 1003.1c compliant pthreads, mutexes and condition variables,

and POSIX 1003.1b compliant pqueues. But POSIX compliance is not high on the priority list of new

developments. (A property that RTAI shares with standard Linux development, by the way.)

8.2. Task management and scheduling

Summary: RTAI offers the whole variety of real-time tasks and schedulers. Besides normal tasks that end


up in the scheduler queues of the operating system, RTAI offers also non-schedulable units of execution:

tasklets, timers, ASRs, and queue blocks.

8.2.1. Task management

RTAI has its own specific API, but offers POSIX wrappers for threads. A task is created with the

following function:

int rt_task_init(
    RT_TASK *task,
    void (*rt_thread)(int),
    int data,
    int stack_size,
    int priority,
    int uses_fpu,
    void (*signal)(void)
);

The function’s arguments are:

• task is a pointer to an RT_TASK type structure whose space must be provided by the application. It must be kept during the whole lifetime of the real time task.

• rt_thread is the entry point of the task function. The parent task can pass a single integer value data to the new task.

• stack_size is the size of the stack to be used by the new task.

• priority is the priority to be given the task. The highest priority is 0, while the lowest is RT_LOWEST_PRIORITY.

• uses_fpu is a flag. A nonzero value indicates that the task will save the floating point registers at context switches. On a (multi) uni-processor, a real-time task does not save its floating point context by default. However, when the task is created for a symmetric multi-processing system, the floating point context is saved, because the task’s context must be saved against CPU migration anyway.

• signal is an “ASR” function (Section 3.4) that is called, within the task environment and with interrupts disabled, when the task becomes the current running task after a context switch.

Here is typical RTAI code for creating and starting a real-time task, from within an init_module(), that periodically runs the application dependent function fun():

#define STACK_SIZE 4096
static RT_TASK mytask; ➊

int init_module(void)
{
    rt_task_init(&mytask, fun, 0, STACK_SIZE, 0, 1, 0); ➋
    rt_set_runnable_on_cpus(&mytask, ...); ➌
    rt_linux_use_fpu(1); ➍
    now = rt_get_time(); ➎
    rt_task_make_periodic(&mytask, now + 2000, 100*1000*1000); ➏
    return 0;
}

// function that runs periodically in
// the created real-time task:
static void fun(int t) { ➐
    ...
    while (...) {
        ... // do what has to be done each period
        rt_task_wait_period(); ➑
    }
}

➊ The task’s data structure.

➋ Initialize the task’s data structures, giving it, among other things, values for stack space and static

priority.

➌ Ask OS to run the task on a selection of specified processors.

➍ Ask the OS to save the floating point registers when switching contexts.

➎ Read in the current absolute time.

➏ Ask the OS to run this task periodically. This call also sets the first time instant that the thread wants

to become active, and its period. These times are in nanoseconds.

➐ The function to be run in the real-time task.

➑ Go to sleep until the scheduler wakes you up when your timer expires.

Using RTAI’s wrappers for POSIX pthreads, a typical skeleton for task (“thread”) creation and

activation looks like this:

static pthread_t thread;        // POSIX data structures for task,
static pthread_mutex_t mutex;   // ... mutex,
static pthread_cond_t cond;     // ... and condition variable

int init_module(void)
{
    pthread_attr_t attr;        // POSIX data structure for
                                // task properties
    pthread_attr_init (&attr);  // initializations
    pthread_mutex_init (&mutex, NULL);
    pthread_cond_init (&cond, NULL);
    pthread_create (&thread, &attr, fun, 0); ➊
    ... // doing stuff until it’s time to delete the thread
    pthread_mutex_lock (&mutex);
    pthread_cond_signal (&cond); ➋
    pthread_mutex_unlock (&mutex);
    ...
}

// function that runs periodically in
// the created real-time task:
static void fun(int t) {
#define TIME_OUT 10000000
    ...
    struct sched_param p;
    p.sched_priority = 1;
    pthread_setschedparam (pthread_self(), SCHED_FIFO, &p); ➌
    ...
    while (1) {
        time = ...;
        pthread_mutex_lock (&mutex);
        pthread_cond_timedwait (&cond, &mutex, time + TIME_OUT); ➍
        pthread_mutex_unlock (&mutex);
        ... // do what has to be done each period
    }
}

➊ Here, the task is created, i.e., its data structures and function to execute are initialized.

➋ The created task is signaled; the signal is typically the notification that the thread should stop itself,

in a clean way.

➌ The thread itself fills in its scheduling properties. (This could also be done by another task.)

➍ This command makes the task sleep until the specified next wake-up time, or until it receives the

signal to clean up. (This signal could have another, task-dependent interpretation too, of course.)

The pthread_cond_timedwait() function is used both to wait for a time to expire, and for a condition to be signaled (Section 4.7):

int pthread_cond_timedwait(

pthread_cond_t *cond, // condition variable

pthread_mutex_t *mutex, // mutex to protect scope

struct timespec *abstime);// absolute time to wake up

So, the semantics of the pure RTAI and the POSIX implementations are not exactly the same. Another difference between both versions is that a POSIX thread initialization makes the task active immediately, while the task created by a rt_task_init() is suspended when created, and must be activated explicitly. (This is achieved by the second argument in rt_task_make_periodic(): it specifies the time when the task will be first woken up.)

8.2.2. Tasklets

The data structure to hold the status and the data connected to a tasklet (Section 2.3) is created with the

following function:

struct rt_tasklet_struct *rt_init_tasklet(void)

It is configured with

int rt_insert_tasklet(

struct rt_tasklet_struct *tasklet, // data structure

int priority, // static priority

void (*handler)(unsigned long), // function to execute

unsigned long data, // data to pass to handler

unsigned long id, // user-defined identifier

int pid) // OS process identifier

There also exist function calls with which one can set most of the above-mentioned properties separately. RTAI executes the tasklets before it runs its scheduler. And tasklets can set priorities to influence the order in which the operating system executes them. An application can also execute a tasklet explicitly (or rather, wake it up for execution) by a rt_tasklet_exec(tasklet) function call. Tasklets do not save their floating point registers by default.
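Putting these calls together, a tasklet could be set up as follows (a sketch built only from the calls shown above; the handler, priority and identifier values are illustrative):

static void my_handler(unsigned long data)
{
    // ... short, non-schedulable work, executed before the
    // scheduler runs ...
}

static struct rt_tasklet_struct *tasklet;

void setup_tasklet(void)
{
    tasklet = rt_init_tasklet();   // create the data structure
    rt_insert_tasklet(tasklet,
                      1,           // static priority
                      my_handler,  // function to execute
                      0,           // data passed to the handler
                      12345,       // user-defined identifier
                      0);          // OS process identifier
    // later, wake the tasklet up for execution explicitly:
    rt_tasklet_exec(tasklet);
}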

8.2.3. Timers

These are nothing else but timed tasklets, so their interface functions have the same semantics as those of tasklets. rt_init_timer() is in fact a copy of the rt_init_tasklet() function. The major difference lies in the rt_insert_timer() function, which inserts the timer tasklet in a list of timers to be processed by a time manager task. This function has two more parameters than rt_insert_tasklet(), which give the tasklet the semantics of a timer:

int rt_insert_timer(

struct rt_tasklet_struct *timer,

int priority,

RTIME firing_time, // fire time

RTIME period, // period, if timer must be periodic

void (*handler)(unsigned long),

unsigned long data,

int pid)

The pid parameter is not needed, because the timer tasklet will never be referred to as a “real” task anyway, i.e., as a task that is scheduled by the scheduler. So, some of the fields in the timer data structure (which is equal to the tasklet data structure) are not used. The timer list is ordered according to the desired fire time of the timer tasklets. The time manager always inherits the priority of the highest-priority timer.
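A periodic timer tasklet could then be set up like this (a sketch; the priority and the times, in the scheduler’s internal time units, are illustrative):

static void tick_handler(unsigned long data)
{
    // ... executed by the time manager at every firing time ...
}

static struct rt_tasklet_struct *timer;

void setup_timer(void)
{
    RTIME now = rt_get_time();
    timer = rt_init_timer();     // same data structure as a tasklet
    rt_insert_timer(timer,
                    1,           // priority (inherited by the manager)
                    now + 10000, // first firing time
                    10000,       // period between subsequent firings
                    tick_handler,
                    0,           // data passed to the handler
                    0);          // pid: not used, see above
}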

8.2.4. ASR

Via the signal parameter of rt_task_init(), the application programmer can register a function that will be executed, within the task’s context, whenever the task it belongs to is scheduled, just before that task itself runs. This is the functionality of what is sometimes called an Asynchronous Service Routine in other operating systems (Section 3.4). An ASR is different from a tasklet, in the following sense:

• the ASR’s function is executed in the context of the task it belongs to, while a tasklet has its own context.

• The ASR is run with interrupts disabled. (This is not always the case for ASRs in other operating systems.)

• The ASR is not a schedulable task itself, i.e., it will never show up in the scheduling queues, just like the timer tasklets.

8.2.5. Queue blocks

(TODO: what is the real use of queue blocks? Seems to be a primitive that somebody happened to have

implemented (on QNX) and ported to RTAI without filling a real need?)

Queue blocks are simple structures that contain a pointer to a function and the time at which the function must be executed. The queue blocks are linked into a list, and a family of functions is provided to manage the whole thing. The functions are of the type void (*handler)(void *data, int event), and therefore the simple structures also include the arguments data and event. The application may or may not use any of the arguments.

8.2.6. Task scheduling

RTAI provides several complementary scheduling configuration options:

• Depending on the hardware, the following scheduling options are available: uni-processor scheduling (UP), multi-processor scheduling (MUP; the application programmer can assign each task to a specific (set of) processors), and symmetric multi-processor scheduling (SMP; the scheduler assigns tasks at run-time to any available processor).

• Tasks can configure periodic scheduling (scheduled every time a certain time has elapsed) and one-shot scheduling (scheduled only once at the requested time).

• RTAI has static priority-based scheduling (“SCHED_FIFO”) as its default hard real-time scheduler, but it also offers Round Robin time-sliced scheduling (“SCHED_RR”), Rate Monotonic Scheduling, and Earliest Deadline First. It’s the responsibility of the application programmer to get the scheduler and timing choices correct. When multiple scheduler schemes are used, RTAI has made the (arbitrary) choice to give EDF tasks a higher priority than tasks scheduled with other policies.

By definition (Section 2.5), only SCHED_FIFO and SCHED_RR can be chosen on a per task basis, with a per task quantum time (only relevant for SCHED_RR):

rt_set_sched_policy(
    RT_TASK *task,     // pointer to task’s data structure
    int policy,        // 0: RT_SCHED_FIFO, 1: RT_SCHED_RR
    int rr_quantum_ns  // RR time slice in nanoseconds, lying between
                       // 0 (= default Linux value) and
                       // 0x0FFFFFFF (= 1/4th of a second)
);

(Needing Round Robin scheduling in an application program should be considered as an indication that the program logic is poorly designed. . . ) The EDF and RMS schedulers need global information about the task timings, so the procedures are a little bit more complex:

• RMS: the RMS scheduler is (re)initialized by the function void rt_spv_RMS(int cpuid), to be called after the operating system knows the timing information of all your tasks. That is, after you have made all of your tasks periodic at the beginning of your application, or after you create a periodic task dynamically, or after changing the period of a task. The cpuid parameter of the rt_spv_RMS() function is only used by the multi uni-processor scheduler.

• EDF: this scheduler must know the start and termination times of all your tasks, so a task must call the function

void rt_task_set_resume_end(RTIME resume_time, RTIME end_time);

at the end of every run of one cycle of the task, as sketched below.
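A cycle of an EDF task could thus end as follows (a sketch; the timing values, in internal count units, are illustrative, the surrounding task set-up is as in Section 8.2.1, and the exact suspension behaviour of the call is documented in the RTAI reference manual):

static void edf_task(int t)
{
    while (1) {
        // ... do what has to be done in this cycle ...

        // announce the start ("resume") and deadline ("end") of the
        // next cycle to the EDF scheduler, at the end of this cycle:
        RTIME now = rt_get_time();
        rt_task_set_resume_end(now + 10000, now + 15000);
    }
}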

RTAI provides several function calls that influence task scheduling (ABCscheduler/rtai_sched.c):

void rt_task_yield(void);

// stops the current task and takes it at the end of the list of

// ready tasks, with the same priority. The scheduler makes the

// next ready task of the same priority active.

int rt_task_suspend(RT_TASK *task);

// suspends execution of the "task". It will not be executed

// until a call to "rt_task_resume()" or

// "rt_task_make_periodic()" is made.

int rt_task_resume(RT_TASK *task);

// resumes execution of the "task" previously suspended by

// "rt_task_suspend()" or makes a newly created task ready to run.

int rt_task_make_periodic(

RT_TASK *task,

RTIME start_time,

RTIME period);

// mark the "task" as available for periodic execution, with

// period "period", when "rt_task_wait_period()" is called.

// The time of the task’s first execution is given by

// "start_time", an absolute value measured in clock ticks. 80

Chapter 8. RTAI: the features

int rt_task_make_periodic_relative_ns(

RT_TASK *task,

RTIME start_delay,

RTIME period);

// As "rt_task_make_periodic", but with "start_delay" relative

// to the current time and measured in nanosecs.

void rt_task_wait_period(void);

// suspends the execution of the currently running task until

// the next period is reached. The task must have been previously

// marked for execution with "rt_task_make_periodic()" or

// "rt_task_make_periodic_relative_ns()".

// The task is suspended only temporarily, i.e. it simply gives up

// control until the next time period.

void rt_task_set_resume_end_times(RTIME resume, RTIME end);

int rt_set_resume_time(RT_TASK *task, RTIME new_resume_time);

int rt_set_period(RT_TASK *task, RTIME new_period);

RTIME next_period(void);

// returns the time when the caller task will run next.

void rt_busy_sleep(int ns);

// delays the execution of the caller task without giving back

// the control to the scheduler. This function burns away CPU

// cycles in a busy wait loop. It may be used for very short

// synchronization delays only. "nanosecs" is the number of

// nanoseconds to wait.

void rt_sleep(RTIME delay);

// suspends execution of the caller task for a time of

// "delay" internal count units. During this time the CPU is

// used by other tasks.

// A higher priority task or interrupt handler can run

// during the sleep, so the actual time spent in this function

// may be longer than the specified time.

void rt_sleep_until(RTIME time);

// similar to "rt_sleep", but the parameter "time" is the

// absolute time until when the task is suspended. If the

// given time is already passed this call has no effect.

int rt_task_wakeup_sleeping(RT_TASK *task);

The status of a task can be found with:

int rt_get_task_state (RT_TASK *task);

The task state is formed by the bitwise OR of one or more of the following flags:

• READY: task is ready to run (i.e. unblocked).

• SUSPENDED: task is suspended.

• DELAYED: task waits for its next running period or expiration of a timeout.

• SEMAPHORE: task is blocked on a semaphore.

• SEND: task sent a message and waits for the receiver task.

• RECEIVE: task waits for an incoming message.

• RPC: task sent a Remote Procedure Call and the receiver has not got it yet.

• RETURN: task waits for the reply to a Remote Procedure Call.

The returned task state is just approximate information. Timer and other hardware interrupts may cause a change in the state of the queried task before the caller can evaluate the returned value. The caller should disable interrupts if it wants reliable info about another task.

A task can find its own task data structure with:

RT_TASK *rt_whoami (void);

Tasks can choose whether or not to save floating point registers at context switches:

int rt_task_use_fpu (RT_TASK* task, int use_fpu_flag);

// informs the scheduler that floating point arithmetic

// operations will be used by the "task".

// If "use_fpu_flag" has a nonzero value, FPU context is also

// switched when task or the kernel become active. This makes task

// switching slower. The initial value of this flag is set by

// "rt_task_init()" when the real time task is created. By default,

// a Linux "task" has this flag cleared. It can be set with the

// "LinuxFpu" command line parameter of the "rtai_sched" module.

void rt_linux_use_fpu (int use_fpu_flag);

// informs the scheduler that floating point arithmetic

// operations will be used in the background task (i.e.,

// the Linux kernel itself and all of its processes).

8.2.7. Getting the time

RTAI provides several function calls for getting the current time (ABCscheduler/rtai_sched.c):

RTIME rt_get_time(void)

RTIME rt_get_time_cpuid(unsigned int cpuid)

RTIME rt_get_time_ns(void)

RTIME rt_get_time_ns_cpuid(unsigned int cpuid)

RTIME rt_get_cpu_time_ns(void)

The time is given in “ticks”, or in nanoseconds. The cpuid parameter indicates the number of the CPU in a multi-processor system, and these calls (and rt_get_cpu_time_ns()) read the local Time Stamp Clock, instead of the external timer chip. The latter usually has a lower resolution.

8.3. Interrupts and traps

An interrupt handler (Section 3.3) must be registered with the operating system via a call of the

following function:

int rt_request_global_irq (

unsigned int irq,

void (*handler)(void)

);

This call installs the handler function as the interrupt service routine for IRQ level irq. handler is then invoked whenever interrupt number irq occurs. The installed handler must take care of properly activating any Linux handler using the same irq number, by calling the void rt_pend_linux_irq(unsigned int irq) function, which “pends” the interrupt to Linux (in software!). This means that Linux will process the interrupt as soon as it gets control back from RTAI. Note that, at that time, hardware interrupts are again enabled for RTAI. The use of rt_pend_linux_irq() only makes sense for edge-triggered interrupts (Section 3.2): a level-triggered one is still active, unless you have acknowledged it already explicitly.
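Put together, a hard real-time ISR that also lets Linux see the same interrupt could look like this (a sketch; the IRQ number is illustrative):

#define MY_IRQ 7   // illustrative IRQ number

static void my_isr(void)
{
    // ... urgent, deterministic work only ...

    // let the normal Linux handler for this IRQ run as soon as
    // Linux gets control back from RTAI:
    rt_pend_linux_irq(MY_IRQ);
}

int init_module(void)
{
    return rt_request_global_irq(MY_IRQ, my_isr);
}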

From an RTAI task, one can also register an interrupt handler with Linux, via

int rt_request_linux_irq (

unsigned int irq,

void (*handler)(int irq, void *dev_id, struct pt_regs *regs),

char *linux_handler_id,

void *dev_id

);

This forces Linux to share the interrupt. The handler is appended to any already existing Linux handler for the same irq and run as a Linux irq handler. The handler appears in /proc/interrupts, under the name given in the linux_handler_id parameter. The dev_id parameter is passed to the interrupt handler, in the same way as in the standard Linux irq request call.

void rt_request_timer (

void (*handler)(void),

int tick,

int apic

);

registers the handler as the ISR of a timer interrupt. If tick is zero, the timer is executed only once. If apic is nonzero, the local APIC is used (Section 3.2). The difference with the timer tasklets (Section 8.2.3) is that the latter are not directly registered as an interrupt handler, but executed by a timer manager (which is itself woken up by a timer).

Floating point register saving is on by default in RTAI interrupt handlers. The DSR functionality (Section 3.4) is available through tasklets, and ASR functionality through the signal() parameter. One can also select which CPU must receive and handle a particular IRQ, via the rt_assign_irq_to_cpu(int irq, int cpu) function. rt_reset_irq_to_sym_mode(int irq) resets this choice, back to the symmetric “don’t care” behaviour.

In RTAI, application programmers must explicitly enable interrupts themselves, via rt_irq_enable(). Whether this is done in the ISR or in the DSR depends on the hardware of the application: if it has an interrupt ready immediately, enabling the interrupts in the ISR could cause recursive calls to the ISR, possibly blocking the system.

Section 3.3 discussed the concept of traps and trap handlers. The API that RTAI offers is as follows:

// data structure of handler:
typedef int (*RT_TRAP_HANDLER)(
    int,              // interrupt vec
    int,              // signal number
    struct pt_regs *, // argument pointers that can be
                      // given to a trap handler (see Linux)
    void *            // data pointer
);

// fill in trap handler data structure:
int rt_trap_handler(
    int vec,
    int signo,
    struct pt_regs *regs,
    void *dummy_data
);

// register trap handler:
RT_TRAP_HANDLER rt_set_task_trap_handler(
    RT_TASK *task,          // task which registers handler
    unsigned int vec,       // interrupt vec which triggers handler
    RT_TRAP_HANDLER handler // data structure of handler
);

RTAI reserves 32 system signals, most of which correspond to what standard Linux uses. These signals are denoted by “signo” in the code above, and are defined in the rtai_signr[NR_TRAPS] data structure in the file arch/i386/rtai.c, for i386 only. The default configuration policies of RTAI are: (i) to add the same handler to all traps, (ii) to trap the non-maskable interrupt of the processor and let it do nothing (getting it in the first place indicates that something major has gone wrong), and (iii) to suspend a task that calls a non-existing handler.
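Registering a trap handler for one task could look as follows (a sketch; the vector number and the recovery action are illustrative):

static int my_trap_handler(int vec, int signo,
                           struct pt_regs *regs, void *data)
{
    // ... decide how to recover from the trap ...
    return 0;
}

void install_trap_handler(RT_TASK *task)
{
    // register the handler for the (illustrative) trap vector 0:
    rt_set_task_trap_handler(task, 0, my_trap_handler);
}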

8.4. IPC: synchronization

Also in this area, RTAI offers the whole range of synchronization primitives: semaphore and mutex,

condition variable, and barrier or flags (“bits”).

8.4.1. Semaphore and mutex

RTAI has counting semaphores, binary semaphores and recursive semaphores (Section 4.6.1). Semaphores can block tasks waiting on them in FIFO or priority order. Semaphores in RTAI have the following API:

void rt_sem_init (SEM* sem, int value);

int rt_sem_signal (SEM* sem);

int rt_sem_wait (SEM* sem);

// version that returns immediately when not free:

int rt_sem_wait_if (SEM* sem);

// versions with a timeout:

int rt_sem_wait_until (SEM* sem, RTIME time); // absolute time

int rt_sem_wait_timed (SEM* sem, RTIME delay); // relative time

RTAI semaphores have priority inheritance and (adaptive) priority ceiling (Section 4.9).
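A typical producer/consumer use of these calls (a sketch; initializing the semaphore to 0 makes it act as a pure signalling device):

static SEM sem;

void init(void)
{
    rt_sem_init(&sem, 0);    // no "tokens" available initially
}

static void producer(int t)
{
    while (1) {
        // ... produce data ...
        rt_sem_signal(&sem); // wake up the consumer
    }
}

static void consumer(int t)
{
    while (1) {
        rt_sem_wait(&sem);   // block until data is available
        // ... consume the data ...
    }
}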

8.4.2. POSIX mutex

RTAI implements the standard POSIX mutexes (Section 4.6.2), with the prescribed priority inheritance.

The API is, of course, the standard POSIX API as presented in Section 4.6.2.

8.4.3. Spinlocks

Application programmers can choose from a wide variety of spinlocks, each with a well-defined scope. Basically, they look like the spinlocks in Linux, with a “rt_” prefix, but using the same data structures. But the RTAI spinlocks need an extra level with respect to Linux, because Linux runs on a hardware simulation layer as soon as RTAI has been activated. Indeed, from that moment on, the Linux calls are replaced by “soft” versions, in the sense that RTAI can always pre-empt critical Linux sections. Here is the list of RTAI spinlocks:

unsigned long flags;

spinlock_t lock;

rt_spin_lock(&lock);

/* critical section in Linux (as the ‘spin_lock()’ there, hence

Linux’s (soft) interrupts still pass), but pre-emptable by RTAI.

*/

rt_spin_unlock(&lock);

rt_spin_lock_irq(&lock);

/* same as above but Linux’s soft interrupts disabled. */

rt_spin_unlock_irq(&lock);

flags = rt_spin_lock_irqsave(&lock);

/* critical section in RTAI with hardware interrupts disabled

on current CPU. */

rt_spin_lock_irqrestore(flags,&lock);

The following locks don’t need a lock data structure, because they are drastic, and use a “global lock”

over all processors:

rt_global_cli();

/* critical section with interrupts disabled on the calling CPU,

and "global lock" for all CPUs. */

rt_global_sti();

flags = rt_global_save_flags_and_cli();

/* as "rt_global_cli()", but saves the state of the interrupt flag,

and the "global lock" flag. */

rt_global_restore_flags(flags);

flags = hard_lock_all();

/* Most drastic way of making the system safe from pre-emption by

interrupts.

On UP boxes this is the same as "rt_global_save_flags_and_cli()"
above. On SMP it locks out all the other CPUs, sending them an
IPI (inter-processor interrupt) signal. */

hard_unlock_all(flags);

The normal Linux spinlocks still work in RTAI, so be careful when using them, because they won’t always offer the same protection in RTAI hard real time as what you expect from knowing how they behave in un-modified Linux.

8.4.4. Condition variable

RTAI implements the standard POSIX condition variables (Section 4.7).

8.4.5. Barrier/flags

RTAI has a barrier-like (Section 4.6.5) primitive, which it calls bits. It allows tasks to suspend on an
AND or OR combination of bits set in a 32-bit mask called "BITS" (include/rtai_bits.h):

struct rt_bits_struct {

struct rt_queue queue; // must be first in struct

int magic;

int type; // needed because BITS and semaphores share some things

unsigned long mask;

};

typedef struct rt_bits_struct BITS;

Tasks can read and write bits in this mask, and perform "wait" calls on the mask. The full API is as
follows:

#include <rtai_bits.h>

// basic bit operation functions, indicated by macros:

#define SET_BITS 0

#define CLR_BITS 1

#define SET_CLR_BITS 2

#define NOP_BITS 3

void rt_bits_init(BITS *bits, unsigned long mask)

// create and initialize the bits structure pointed to by "bits",

// setting bits mask to "mask".

int rt_bits_delete(BITS *bits)

// delete the "bits" data structure

unsigned long rt_get_bits(BITS *bits)

// get the actual value of the "bits" mask.

unsigned long rt_bits_signal(

BITS *bits,

int setfun,

unsigned long masks)

// execute "setfun" (which is any of the basic bits operations

// above: SET_BITS, etc.), oring/anding masks onto the actual

// bits mask, schedule any task blocked on "bits" if the new bits

// mask meets its request;

// returns the value of bits after executing setfun;

// in case of combined operations (AND and OR), "masks" is to be

// cast to a pointer of a two elements array of unsigned longs

// containing the masks to be used for the combined "setfun".

int rt_bits_reset(BITS *bits, unsigned long mask)

// unconditionally schedule any task blocked on "bits" and

// reset its mask to "mask";

// returns the value of bits mask before being reset to "mask".

int rt_bits_wait(

BITS *bits,

int testfun,

unsigned long testmasks,

int exitfun,

unsigned long exitmasks,

unsigned long *resulting_mask)

// test "bits" mask against "testmasks" according to "testfun"

// (which is any of the basic bits operations above, e.g., SET_BITS);
// if the test is not satisfied, block the task;
// whenever the condition is met, execute "exitfun" (any bits
// operation above), using "exitmasks", and
// save the mask resulting after the whole processing in the
// variable pointed to by "resulting_mask".

int rt_bits_wait_if(

BITS *bits,

int testfun,

unsigned long testmasks,

int exitfun,

unsigned long exitmasks,

unsigned long *resulting_mask)

// as "rt_bits_wait",

// but does not block if "testfun" is not satisfied.

int rt_bits_wait_until(

BITS *bits,

int testfun,

unsigned long testmasks,

int exitfun,

unsigned long exitmasks,

RTIME time,

unsigned long *resulting_mask)

// as "rt_bits_wait",

// but waits at most till "time" expires.

unsigned long rt_bits_wait_timed(

BITS *bits,

int testfun,

unsigned long testmasks,

int exitfun,

unsigned long exitmasks,

RTIME delay,

unsigned long *resulting_mask)

// as "rt_bits_wait_until",

// but waits at most for "delay" to meet the required condition.
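As an illustration, the following sketch has one task block until two event bits are both set, clearing them again on wakeup. The event names are made up, and the use of SET_BITS as the test function and CLR_BITS as the exit function follows the comments above; the exact test semantics (ALL versus ANY of the mask) should be checked against include/rtai_bits.h.

#include <rtai_bits.h>

#define EVENT_SENSOR  0x01     /* made-up event bits */
#define EVENT_NETWORK 0x02

static BITS events;

static void setup(void)
{
    rt_bits_init(&events, 0);  /* all bits initially clear */
}

static void sensor_task(void)
{
    /* OR the sensor bit into the mask; any task whose request
       is now met gets scheduled */
    rt_bits_signal(&events, SET_BITS, EVENT_SENSOR);
}

static void waiter_task(void)
{
    unsigned long resulting_mask;

    /* block until both bits are set, then clear them on exit */
    rt_bits_wait(&events, SET_BITS, EVENT_SENSOR | EVENT_NETWORK,
                 CLR_BITS, EVENT_SENSOR | EVENT_NETWORK,
                 &resulting_mask);
}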

8.5. IPC: data exchange

RTAI has messages, mailboxes, and POSIX message queues ("pqueues"), including synchronous
message passing semantics (Section 5.3), FIFOs, Remote Procedure Calls, and shared memory.

8.5.1. Messages

RTAI makes the distinction between messages and mailboxes, as explained in Section 5.3. Messages
are the more primitive form: in RTAI, the basic implementation of messages carries only a four-byte
message in the call itself, so no buffering needs to be provided. The API for this simple inter-task
messaging is:

RT_TASK* rt_send (RT_TASK* task, unsigned int msg);

// sends the message "msg" to the task "task".

// If the receiver task is ready to get the message,

// "rt_send" returns immediately.

// Otherwise the caller task is blocked.

RT_TASK* rt_send_if (RT_TASK* task, unsigned int msg);

// sends the message “if possible”. If the receiver task is not

// ready, the sending task just continues.

// On success, "task" (the pointer to the task that received the

// message) is returned.

// If message has not been sent, 0 is returned.

RT_TASK* rt_send_until (RT_TASK* task, unsigned int msg, RTIME time);

RT_TASK* rt_send_timed (RT_TASK* task, unsigned int msg, RTIME delay);

// As "rt_send", but the sending is given up after either an

// absolute "time", or a relative "delay".

RT_TASK* rt_receive (RT_TASK* task, unsigned int *msg);

// gets a message from the "task", and stores it in the buffer "msg"

// that the caller task provides.

// If "task" is equal to 0, the caller accepts messages from any

// task. If there is a pending message, "rt_receive" returns

// immediately. Otherwise the caller task is blocked and queued up.

RT_TASK* rt_receive_if (RT_TASK* task, unsigned int *msg);

// as "rt_receive", but only “if possible”.

RT_TASK* rt_receive_until (RT_TASK* task, unsigned int *msg, RTIME time);

RT_TASK* rt_receive_timed (RT_TASK* task, unsigned int *msg, RTIME delay);

// as "rt_receive", but with time limits as in the send calls.

Blocking may happen in priority order or on a FIFO basis; this is determined by the RTAI compile-time
option MSG_PRIORD.
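A minimal sketch of this message passing, assuming two tasks that have already been created: the client sends a made-up four-byte command code, and the server accepts a message from any task.

#define CMD_START 1            /* made-up protocol value */

static RT_TASK server_task;

static void client(int arg)
{
    /* blocks until the server is ready to receive */
    rt_send(&server_task, CMD_START);
}

static void server(int arg)
{
    unsigned int msg;
    RT_TASK *sender;

    /* task argument 0: accept a message from any task */
    sender = rt_receive(0, &msg);
    if (sender != 0 && msg == CMD_START) {
        /* ... act on the command ... */
    }
}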

More recently, RTAI got so-called extended messages. These are less efficient than their four-byte

cousins, but more flexible in that they allow messages of arbitrary size. To this end, the extended

message functions use a double buffer data structure:

struct mcb_t {

void *sbuf; // buffer for the sender

int sbytes; // number of bytes sent

void *rbuf; // buffer for the receiver

int rbytes; // number of bytes received

};

The following function prototypes are quite self-explanatory, with smsg indicating the sender's message
buffer, ssize the sender's message size, and rmsg and rsize similarly for the receiver.

RT_TASK *rt_sendx(RT_TASK *task, void *smsg, int ssize)

RT_TASK *rt_sendx_if(RT_TASK *task, void *smsg, int ssize)

RT_TASK *rt_sendx_until(

RT_TASK *task,

void *smsg,

int ssize,

RTIME time)

RT_TASK *rt_sendx_timed(

RT_TASK *task,

void *smsg,

int ssize,

RTIME delay)

RT_TASK *rt_receivex(

RT_TASK *task,

void *msg,

int size,

int *truesize)

RT_TASK *rt_receivex_if(

RT_TASK *task,

void *msg,

int size,

int *truesize)

RT_TASK *rt_receivex_until(

RT_TASK *task,

void *msg,

int size,

int *truesize,

RTIME time)

RT_TASK *rt_receivex_timed(

RT_TASK *task,

void *msg,

int size,

int *truesize,

RTIME delay)

RT_TASK *rt_rpcx(

RT_TASK *task,

void *smsg,

void *rmsg,

int ssize,

int rsize)

RT_TASK *rt_rpcx_if(

RT_TASK *task,

void *smsg,

void *rmsg,

int ssize,

int rsize)

RT_TASK *rt_rpcx_until(

RT_TASK *task,

void *smsg,

void *rmsg,

int ssize,

int rsize,

RTIME time)

RT_TASK *rt_rpcx_timed(

RT_TASK *task,

void *smsg,

void *rmsg,

int ssize,

int rsize,

RTIME delay)

RT_TASK *rt_returnx(RT_TASK *task, void *msg, int size)
// extended-message counterpart of "rt_return" (see Section 8.5.5).

int rt_isrpcx(RT_TASK *task)
// extended-message counterpart of "rt_isrpc" (see Section 8.5.5).
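A sketch of an extended-message exchange under the same assumptions: the payload struct is invented for illustration, and the receiver names the sender explicitly instead of accepting from any task.

struct reading {               /* made-up payload */
    long timestamp;
    int value;
};

static RT_TASK producer_task, consumer_task;

static void producer(int arg)
{
    struct reading r = { 0, 42 };

    /* blocks until the consumer has taken the whole buffer */
    rt_sendx(&consumer_task, &r, sizeof(r));
}

static void consumer(int arg)
{
    struct reading r;
    int truesize;

    /* "truesize" reports how many bytes were actually passed */
    rt_receivex(&producer_task, &r, sizeof(r), &truesize);
}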

8.5.2. Mailboxes

RTAI supports mailboxes (Section 5.3). They are flexible in the sense that they allow sending messages
of any size using any mailbox buffer size. The original implementation uses a FIFO (First In, First
Out) policy; a recent addition is "typed" mailboxes, which have a priority message delivery option.

Sending and receiving messages can be done with several policies:

• Unconditionally: the task blocks until the whole message has passed.
• Best-effort: only pass the bytes that can be passed immediately.
• Conditional on availability: only pass a message if the whole message can be passed immediately.
• Timed: with absolute or relative time-outs.

The API for mailboxes is given in include/rtai_sched.h (of all places...):

struct rt_mailbox {

int magic; // identifier for mailbox data structure

SEM sndsem, // semaphores to queue sending...

rcvsem; // ... and receiving tasks.

RT_TASK *waiting_task, // pointer to waiting tasks

*owndby; // pointer to task that created mailbox

char *bufadr; // mailbox buffer

int size, // mailbox size

fbyte, // circular buffer first byte pointer

lbyte, // circular buffer last byte pointer

avbs, // bytes in buffer

frbs; // bytes free

spinlock_t lock; // lock to protect access to buffer

};

typedef struct rt_mailbox MBX;

int rt_typed_mbx_init(MBX *mbx, int size, int qtype);

// Initialize a mailbox "mbx" with a buffer of "size" bytes,

// queueing tasks according to the specified type: FIFO_Q, PRIO_Q and

// RES_Q.

int rt_mbx_init(MBX *mbx, int size);

// equivalent to rt_typed_mbx_init(mbx, size, PRIO_Q)

int rt_mbx_delete(MBX *mbx);

// Delete the mailbox "mbx".

int rt_mbx_send(MBX *mbx, void *msg, int msg_size);

// Send unconditionally to the mailbox "mbx" the message pointed
// to by "msg", whose size is "msg_size" bytes; i.e., return when
// the whole message has been sent or an error has occurred.

// Returns the number of unsent bytes.

int rt_mbx_send_wp(MBX *mbx, void *msg, int msg_size);

// As "rt_mbx_send", but only available bytes.

// “_wp” stands for: “what possible.”

int rt_mbx_send_if(MBX *mbx, void *msg, int msg_size);

// Send to the mailbox "mbx" only if all "msg_size" bytes

// of "msg" can be received immediately.

// Returns the number of unsent bytes, i.e. either 0 or "msg_size".

// “_if” stands for: “if available.”

int rt_mbx_send_until(MBX *mbx, void *msg, int msg_size, RTIME time);

// As "rt_mbx_send", unless the absolute time dead-line "time"

// is reached.

int rt_mbx_send_timed(MBX *mbx, void *msg, int msg_size, RTIME delay);

// As "rt_mbx_send", unless the time-out "delay" has expired.

// Similar semantics for receiving message:

int rt_mbx_receive(MBX *mbx, void *msg, int msg_size);

int rt_mbx_receive_wp(MBX *mbx, void *msg, int msg_size);

int rt_mbx_receive_if(MBX *mbx, void *msg, int msg_size);

int rt_mbx_receive_until(MBX *mbx, void *msg, int msg_size, RTIME time);

int rt_mbx_receive_timed(MBX *mbx, void *msg, int msg_size, RTIME delay);

int rt_mbx_evdrp(MBX *mbx, void *msg, int msg_size);

// This is the “unsafe” version, that doesn’t protect against

// overwriting the circular message buffer.

// The name stands for “eventual dropping” of data. (???)
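To show the mailbox calls in context, here is a sketch with a 256-byte mailbox carrying fixed-size records between two tasks; the buffer size and record layout are arbitrary choices.

static MBX mbx;

struct sample {                /* made-up record */
    int channel;
    int value;
};

static int setup(void)
{
    /* PRIO_Q queueing, i.e., what rt_mbx_init() defaults to */
    return rt_typed_mbx_init(&mbx, 256, PRIO_Q);
}

static void sender_task(int arg)
{
    struct sample s = { 0, 17 };

    /* blocks until the whole record is in the mailbox buffer */
    rt_mbx_send(&mbx, &s, sizeof(s));
}

static void receiver_task(int arg)
{
    struct sample s;

    /* blocks until sizeof(s) bytes have arrived */
    rt_mbx_receive(&mbx, &s, sizeof(s));
}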

Typed mailboxes offer functionality that is a superset of the mailboxes above, adding the following
features:

• Message broadcasting: a message is sent to all tasks that are pending on the same mailbox.
• Priority configuration: an urgent or normal wakeup policy can be set when creating the mailbox.

These features are achieved by adding a 1-byte type field to every message inserted in a typed mailbox.
So, when receiving, it is possible to discriminate between normal, urgent and broadcast messages. The
type field is silently removed by the receiving functions, so it is not visible from the user's point of
view. Users must take the type fields into account only when specifying the typed mailbox sizes.

The API for typed mailboxes is given in include/rtai_tbx.h:

struct rt_typed_mailbox {

int magic;

int waiting_nr; // number of tasks waiting for a broadcast

SEM sndsmx, // semaphores to queue sending...

rcvsmx; // ... and receiving tasks.

SEM bcbsmx; // binary semaphore needed to wakeup the

// sleeping tasks when the broadcasting of a

// message is terminated

RT_TASK *waiting_task;

char *bufadr; // mailbox buffer

char *bcbadr; // broadcasting buffer

int size; // mailbox size

int fbyte; // circular buffer read pointer

int avbs; // bytes occupied

int frbs; // bytes free

spinlock_t buflock; // lock to protect buffer access

};

typedef struct rt_typed_mailbox TBX;

// The function prototypes are similar to normal mailboxes,

// with "_mbx_" replaced by "_tbx_". For example:

int rt_tbx_init(TBX *tbx, int size, int type);

int rt_tbx_send(TBX *tbx, void *msg, int msg_size)

// etc.

// Some functions are new:

int rt_tbx_broadcast(TBX *tbx, void *msg, int msg_size);

int rt_tbx_broadcast_if(TBX *tbx, void *msg, int msg_size);

int rt_tbx_broadcast_until(TBX *tbx, void *msg, int msg_size, RTIME time);

int rt_tbx_broadcast_timed(TBX *tbx, void *msg, int msg_size, RTIME delay);

int rt_tbx_urgent(TBX *tbx, void *msg, int msg_size);

int rt_tbx_urgent_if(TBX *tbx, void *msg, int msg_size);

int rt_tbx_urgent_until(TBX *tbx, void *msg, int msg_size, RTIME time);

int rt_tbx_urgent_timed(TBX *tbx, void *msg, int msg_size, RTIME delay);
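A sketch of a broadcast on a typed mailbox: one task wakes every reader pending on the mailbox with the same value. The queueing type and the extra byte per message in the size calculation are assumptions based on the remarks above.

static TBX tbx;

static int setup(void)
{
    /* room for 16 int-sized messages, plus one hidden type
       byte each (assumed sizing rule) */
    return rt_tbx_init(&tbx, 16 * (sizeof(int) + 1), FIFO_Q);
}

static void announcer(int arg)
{
    int generation = 7;

    /* delivered to ALL tasks currently blocked on the mailbox */
    rt_tbx_broadcast(&tbx, &generation, sizeof(generation));
}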

The unconditional versions of mailbox communication correspond to synchronous message passing.

8.5.3. POSIX message queues

RTAI supports standard POSIX message queues (Section 5.3).
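For reference, the standard POSIX calls look as follows in a plain user-space program; nothing here is RTAI-specific.

#include <fcntl.h>
#include <mqueue.h>
#include <string.h>

int pqueue_demo(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
    char buf[64];
    mqd_t q;

    q = mq_open("/demo_q", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1)
        return -1;
    mq_send(q, "hello", strlen("hello") + 1, 0);   /* priority 0 */
    mq_receive(q, buf, sizeof(buf), NULL);         /* blocks if empty */
    mq_close(q);
    mq_unlink("/demo_q");
    return 0;
}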

8.5.4. FIFO

FIFOs are a basic IPC data exchange primitive, and are well supported under RTAI. RTAI offers an API
for kernel space FIFOs, and one for user space FIFOs:

struct rt_fifo_info_struct{

unsigned int fifo_number;

unsigned int size;

unsigned int opncnt;

char name[RTF_NAMELEN+1];

};

struct rt_fifo_get_info_struct{

unsigned int fifo;

unsigned int n;

struct rt_fifo_info_struct *ptr;

};

// initialize FIFO data structure:
int rtf_init(void);

/* Attach a handler to an RT-FIFO.
 *
 * Allows the function "handler" to be called when a user process
 * reads from or writes to the FIFO. When the function is called,
 * it is passed the fifo number as its argument.
 */
extern int rtf_create_handler(
    unsigned int fifo,                  /* RT-FIFO */
    int (*handler)(unsigned int fifo)); /* function to be called */

Here is the skeleton of a user space task and a hard real-time task that use a FIFO to communicate; the
other IPC primitives use similar skeletons.

// user space task:

int main(int argc,char *argv[])

{ int rtf, cmd;

int data[...];

double ddata[...];

... ➊

if ((rtf = open("/dev/rtf0", O_RDONLY)) < 0) {

fprintf(stderr, "Error opening /dev/rtf0\n");

exit(1);

} ➋

if ((cmd = open("/dev/rtf1", O_WRONLY)) < 0) {

fprintf(stderr, "Error opening /dev/rtf1\n");

exit(1);

} ➌

while (...) {

write(cmd, &data, ...); ➍

...

read(rtf, &ddata, ...);

...

};

...

return 0;

}

// module that creates hard real-time task:

#define RTF 0

#define CMD 1

static RT_TASK mytask;

int init_module(void)

{ ➎

rtf_create(RTF, 4000); ➏

rtf_create(CMD, 100);

rt_task_init(&mytask, fun, 0, STACK_SIZE, 0, 1, 0);

rt_set_runnable_on_cpus(&mytask, ...);

rt_assign_irq_to_cpu(TIMER_8254_IRQ, TIMER_TO_CPU);

rt_linux_use_fpu(1);

now = rt_get_time();

rt_task_make_periodic(&mytask, now + 2000, ...);

return 0;

}

// function run in real-time task:

static void fun(int t) {

...

while (...) {

cpu_used[hard_cpu_id()]++;

rtf_put(RTF, ..., ...);

rtf_get(CMD, ..., ...);

rt_task_wait_period();

}

}

➊ Opens first FIFO as a user space device.
➋ Opens second FIFO as a user space device.
➌ Writes data to the FIFO.
➍ Reads data from the FIFO.
➎ Creates the first FIFO (4000-byte buffer) in kernel space.
➏ Creates the second FIFO (100-byte buffer) in kernel space.

One can add a handler to a FIFO via rtf_create_handler(). One can also send a signal to notify
data availability, via rtf_set_async_sig(int fd, int signum). This handler and signal
functionality is not available for the other IPC primitives.

8.5.5. RPC

RTAI supports Remote Procedure Calls (Section 5.3), even over a network, in which case the user is
responsible for using appropriate hardware, of course. (This text skips the details of the networked
functionality, because it falls outside the scope of hard real-time systems.) The on-system RPC in

RTAI works as a “send/receive” message pair: a task sends a four-byte message to another task, and then

waits until a reply is received. The caller task is always blocked and queued up. Calling this a “Remote

Procedure Call” is a bit ambitious: the communicating tasks just send four bytes, and they have to agree

on a protocol that defines the meaning of these four bytes, and whether or not the message triggers the

execution of a procedure call at the receiver’s end. The API for this form of RPC is:

RT_TASK *rt_rpc(

RT_TASK *task,

unsigned int to_do,

unsigned int *reply);

// The receiver task may get the message with any "rt_receive_*"

// function. It can send the answer with "rt_return()".

// "reply" points to a buffer provided by the caller.

RT_TASK *rt_return(

RT_TASK *task,

unsigned int reply);

RT_TASK *rt_rpc_if(

RT_TASK *task,

unsigned int to_do,

unsigned int *result);

RT_TASK *rt_rpc_until(

RT_TASK *task,

unsigned int to_do,

unsigned int *result,

RTIME time);

RT_TASK *rt_rpc_timed(

RT_TASK *task,

unsigned int to_do,

unsigned int *result,

RTIME delay);

int rt_isrpc(RT_TASK *task);

// After receiving a message, by calling "rt_isrpc" a task

// can find out whether the sender task "task" is waiting for

// a reply or not.

// "rt_return" is intelligent enough to not send an answer to

// a task which is not waiting for it. Therefore using "rt_isrpc"

// is not necessary, and is discouraged.

The meaning of the suffixes “_if”, “_until”, and “_timed” is as in the APIs of messages and

mailboxes.
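A sketch of the rt_rpc()/rt_return() pair, with made-up command and reply values: the client blocks until the server answers.

#define CMD_PING 1             /* made-up protocol values */
#define ACK      2

static RT_TASK server_task;

static void client(int arg)
{
    unsigned int reply;

    /* blocks until the server calls rt_return() for us */
    rt_rpc(&server_task, CMD_PING, &reply);
}

static void server(int arg)
{
    unsigned int msg;
    RT_TASK *caller;

    caller = rt_receive(0, &msg);      /* accept from any task */
    if (caller != 0 && msg == CMD_PING)
        rt_return(caller, ACK);        /* wakes the blocked client */
}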

8.6. Memory management

Shared memory implementation in shmem; again symmetric. Dynamic memory management.
(TODO: more details.)

8.7. Real-time device drivers

spdrv, rtnet, plus strong integration with Comedi.

(TODO: more details.)

8.8. /proc interface

The /proc/rtai interface is an extension to the standard Linux /proc interface feature: files under the
/proc/rtai subdirectory give status and debug information of the currently active RTAI modules.
These files are activated when the associated module is inserted into the kernel.
/proc interface code can be found in most RTAI source files. It's a non real-time feature (hence, only to
be used by normal user space tasks), but it requires support from the real-time kernel; this support is
implemented again via traps.

8.9. RTAI loadable modules

RTAI’s functionality is made available by dynamically loading modules into the running (and patched)

Linux kernel. Every module extends the API of the kernel with some new “objects” (i.e., function calls

and data structures). Not all modules are needed in all cases, but, vice versa, dependencies exist between

modules, i.e., in order to use functionality in one module, one often also needs to load other modules first.


Core module rtai, made in rtai.c (in rtaidir).

Scheduler module ABCscheduler/rtai_sched.c.

Tasklet module: allocates and initializes the data structures for the tasklet and timer queues; starts the
timers_manager task, which is responsible for the execution of the timers.

Scheduler module ...

Extra scheduler module ...

RTAI utilities module ...

Typed mailboxes module ...

pthreads module ...

Memory manager module ...

FIFOs module fifos/rtai_fifos.c.

LX/RT module lxrt/lxrt.c.

Serial line module spdrv/rtai_spdrv.c.

C++ module ...

Network RPC module net_rpc/net_rpc.c.

Tracing module trace/rtai_trace.c.

Watchdog module watchdog/rtai_watchdog.c.

Bits module bits/rtai_bits.c.

(TODO: explain contents of the different RTAI modules; dependencies: what must be loaded in order to
use the different functionalities mentioned above?)


8.10. Specific features

RTAI has developed a number of features that common real-time operating systems miss:

• LX/RT is the component that allows user space tasks to execute soft and hard real-time functions.
Because this feature is quite extensive, Section 11.5 gives more details.
• Dynamic memory allocation, also by real-time tasks. (TODO: give details.)
• Integration of the Linux Trace Toolkit (http://www.opersys.com/LTT/index.html), which allows one to
trace (i.e., log to a buffer) a large number of activities in the kernel: interrupts, scheduling, creation
of tasks, etc. (TODO: give details.)
• C++ support, Chapter 12.

Chapter 9. Linux-based real-time and

embedded operating systems

This Chapter presents “spin-offs” of the standard Linux kernel that provide hard real-time performance,

or that are targeted to embedded use.

9.1. Introduction

There are two major developments at the RTOS level: RTLinux and RTAI; RTAI forked off an earlier
version of RTLinux. RTLinux and RTAI do basically the same thing (and do it with industrial-strength
quality, except maybe for documentation...), they make their sources available, and they have partial
POSIX compliance, but they don't use compatible APIs. In the embedded (but non real-time) Linux
world, projects have emerged such as uCLinux and Etlinux. But probably standard Linux is the major
workhorse here, thanks to its great configurability.

9.2. RTLinux: Real-Time Linux

RTLinux (http://www.rtlinux.com) is a patch for the standard Linux kernel (often called the “vanilla”

Linux kernel), for single as well as for multi-processor kernels. It offers all components of a hard

real-time system in a multi-threaded real-time kernel, in which standard Linux is the lowest-priority

thread. One advantage (or disadvantage, depending on your taste) of this approach is that real-time space

and Linux space (both kernel space and user space) are strictly separated: programmers have to specify

explicitly which of their tasks should run with real-time capabilities, and which others should not. This

separation also relieves the real-time kernel from “bookkeeping” tasks such as booting, device

initialization, module (un)loading, or dynamic memory allocation. None of these have real-time

constraints, hence they naturally belong to Linux and not RTLinux. From a programming point of view,
most, but not all, of the functionality, habits and tools of Linux remain available at no cost, and the real-time

application can run and be debugged on the same computer on which it is developed, without the need

for cross-compilation tools. This makes “migration” for Linux users quite painless.

The disadvantage of a distribution in the form of a kernel patch is that this patch has (i) to be maintained
(by the RTLinux developers) over evolving kernel versions, and (ii) to be applied (by the users) each time they

upgrade their kernel. Pre-patched versions of some kernel versions are available from the RTLinux web

page. The RTLinux patch is minor: it provides a “virtual interrupt” emulation to standard Linux, and

offers a kernel space micro-kernel with real-time scheduled threads. RTLinux intercepts all hardware

interrupts, checks whether an interrupt is destined for a real-time service routine (and launches the

corresponding ISR if it is), or forwards them to Linux in the form of a virtual interrupt, which is held

until no real-time activity must run. In this scheme, Linux is never able to disable hardware interrupts.

RTLinux comes (after compilation) as a set of loadable modules within Linux: the core module with the

above-mentioned interrupt controller handler, a real-time scheduler (with static priorities), a timer


PAGINE

177

PESO

1.08 MB

AUTORE

Atreyu

PUBBLICATO

+1 anno fa


DESCRIZIONE DISPENSA

This Guide covers the fundamentals of (i) real-time and embedded operating systems (focusing mostly on the differences with general purpose operating systems such as Linux), and (ii) real-time programming. The emphasis is on Free Software and Open Source Software examples: RTAI, RTLinux, eCos, RT-EMS, uCLinux ..., with a more than proportional focus on RTAI. This text also talks about design issues, software patterns and frameworks for real-time applications. That is, the “high-level” aspects of these software projects. These higher levels are often poorly dealt with in publications on real-time programming, which leads to the unfortunate situation that still too many real-time programmers use only the powerful but dangerously unstructured API of their RTOS. Missing the chance to develop more structured, and, hence, more deterministic and more portable software systems. Both the low-level RTOS primitives, and the high-level design issues, are illustrated by the real-world example of a hard real-time core for feedback control and signal processing.


DETTAGLI
Corso di laurea: Corso di laurea magistrale in ingegneria delle telecomunicazioni
SSD:
Università: L'Aquila - Univaq
A.A.: 2011-2012

I contenuti di questa pagina costituiscono rielaborazioni personali del Publisher Atreyu di informazioni apprese con la frequenza delle lezioni di Sistemi embedded e studio autonomo di eventuali libri di riferimento in preparazione dell'esame finale o della tesi. Non devono intendersi come materiale ufficiale dell'università L'Aquila - Univaq o del prof Pomante Luigi.

Acquista con carta o conto PayPal

Scarica il file tutte le volte che vuoi

Paga con un conto PayPal per usufruire della garanzia Soddisfatto o rimborsato

Recensioni
Ti è piaciuto questo appunto? Valutalo!

Altri appunti di Sistemi embedded

Programmazione concorrente
Dispensa
Sistemi Embedded
Dispensa
SystemC
Dispensa
Sistemi Embedded
Dispensa