

4.4. TASKS 35

Listing 4.14: Posting a task

This is how our data filter component might look implemented with a task:

module FilterMagC {
  provides interface StdControl;
  provides interface Read<uint16_t>;
  uses interface Timer<TMilli>;
  uses interface Read<uint16_t> as RawRead;
}
implementation {
  uint16_t filterVal = 0;
  uint16_t lastVal = 0;
  task void readDoneTask();

  command error_t StdControl.start() {
    return call Timer.startPeriodic(10);
  }
  command error_t StdControl.stop() {
    return call Timer.stop();
  }
  event void Timer.fired() {
    call RawRead.read();
  }
  event void RawRead.readDone(error_t err, uint16_t val) {
    if (err == SUCCESS) {
      lastVal = val;
      filterVal *= 9;
      filterVal /= 10;
      filterVal += lastVal / 10;
    }
  }
  command error_t Read.read() {
    post readDoneTask();
    return SUCCESS;
  }
  task void readDoneTask() {
    signal Read.readDone(SUCCESS, filterVal);
  }
} Listing 4.15: An improved implementation of PeriodicReaderC

When Read.read() is called, FilterMagC posts readDoneTask and returns immediately. At

some point later, TinyOS runs the task, which signals Read.readDone.


Tasks are non-preemptive. This means that only one task runs at any time, and TinyOS doesn’t interrupt

one task to run another. Once a task starts running, no other task runs until it completes. This means that

tasks run atomically with respect to one another. This has the nice property that you don’t need to worry

about tasks interfering with one another and corrupting each other’s data. However, it also means that tasks

should usually be reasonably short. If a component has a very long computation to do, it should break it

up into multiple tasks. A task can post itself. For example, the basic execution loop of the Mate bytecode

interpreter is a task that executes a few instructions of a thread and reposts itself.

It takes about 80 microcontroller clock cycles to post and execute a task. Generally, keeping task run

times to at most a few milliseconds is a good idea. Because tasks are run to completion, a long-running

task or large number of not-so-long-running tasks can introduce significant latency (tens of milliseconds)

between a task post and its execution. This usually isn’t a big deal with application-level components. But

there are lower-level components, such as radio stacks, that use tasks. For example, if the packet reception

rate is limited by how quickly the radio can post tasks to signal reception, then a latency of 10ms will limit

the system to 100 packets per second.

Consider these two cases. In both, there are five processing components and a radio stack. The mote

processor runs at 8MHz. Each processing component needs to do a lot of CPU work. In the first case,

the processing components post tasks that run for 5ms and repost themselves to continue the work. In the

second case, the processing components post tasks that run for 500us and repost themselves to continue the work.


In the first case, the task posting overhead is 0.2%: 80 cycles overhead on 40,000 cycles of execution.

In the second case, the task posting overhead is 2%: 80 cycles overhead on 4,000 cycles of execution. So

the time to complete the executions isn’t significantly different. However, consider the task queue latency.

In the first case, when the radio stack posts a task to signal that a packet has been received, it expects to wait

around 25ms (5 processing tasks x 5ms each), limiting the system to 40 packets per second. In the second

case, when the radio stack posts the task, it expects to wait around 2.5ms (5 processing tasks x 500 us each),

limiting the system to 400 packets per second. Because the task posting cost is so low, using lots of short

running tasks improves the responsiveness of the system without introducing significant CPU overhead.
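The arithmetic above is easy to check. As an illustrative sketch (in C rather than nesC; the constants come from the text, but the helper names are invented for this example):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch, not TinyOS code: checking the overhead and
 * latency arithmetic above. Constants come from the text (8MHz MCU,
 * 80 cycles to post and run a task, five processing tasks). */
enum { CPU_HZ = 8000000, POST_OVERHEAD_CYCLES = 80, NUM_TASKS = 5 };

/* Cycles a task of the given duration runs for on an 8MHz MCU. */
static uint32_t task_cycles(uint32_t task_us) {
    return task_us * (CPU_HZ / 1000000u);
}

/* Posting overhead as a fraction of run time, in tenths of a percent. */
static uint32_t overhead_permille(uint32_t cycles) {
    return (POST_OVERHEAD_CYCLES * 1000u) / cycles;
}

/* Worst-case wait behind the processing tasks, and the packet rate. */
static uint32_t queue_latency_us(uint32_t task_us) {
    return NUM_TASKS * task_us;
}
static uint32_t packets_per_sec(uint32_t latency_us) {
    return 1000000u / latency_us;
}
```

Running the numbers this way reproduces the 0.2% vs. 2% overhead and the 40 vs. 400 packets per second from the text.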

Of course, there’s often a tradeoff between lots of short tasks and the amount of state you have to allocate

in a component. For example, let’s say you want to encrypt a chunk of data. If the encryption operation

takes a while (e.g., 10 milliseconds), then splitting it into multiple task executions would improve the overall

system responsiveness. However, if you execute it in a single task, then you can allocate all of the state and

scratch space you need on the stack. In contrast, splitting it across tasks would require keeping this state and


scratch space in the component. There is no hard rule on this tradeoff. But generally, long running tasks can

cause other parts of the OS to perform poorly, so should be avoided when possible.

Programming Hint 2: Keep tasks short.

The post operation returns an error_t. If the task is not already in the task queue, post returns SUCCESS.

If the task is in the task queue (has been posted but has not run yet), post returns FAIL. In either case, the

task will run in the future. Generally, if a component needs a task to run multiple times, it should have the

task repost itself.

Note: one of the largest differences between TinyOS 1.x and 2.0 is the task model. In 1.x, tasks have a

shared, fixed-size queue, and components can post a task multiple times. In 2.0, each task has a reserved

slot. This means that in 1.x, a task post can return FAIL even if the task is not already in the queue (the shared queue may be full), and two posts

back-to-back can both return SUCCESS.
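The 2.0 reserved-slot semantics can be sketched as a toy model (plain C, illustrative names; this is not TinyOS's actual scheduler code):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model of T2's per-task reserved slot: a task can be in
 * the queue at most once. SUCCESS/FAIL stand in for error_t values. */
enum { SUCCESS = 0, FAIL = 1 };
enum { NUM_TASKS = 4 };

static bool queued[NUM_TASKS];   /* one reserved slot per task */

static int post_task(int id) {
    if (queued[id]) return FAIL; /* already posted, not yet run */
    queued[id] = true;           /* reserve the slot */
    return SUCCESS;              /* either way, the task will run */
}

/* The scheduler clears the slot when it dequeues the task to run it. */
static void run_task(int id) {
    queued[id] = false;
}
```

Note that even when post returns FAIL, the task is still pending and will run, which is exactly the guarantee described in the text.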

Returning to the call to read, here are two possible implementations, which differ only slightly, but

demonstrate very different semantics:

command error_t Read.read() {
  return post readDoneTask();
} Listing 4.16: One–to–one binding of a split–phase call


command error_t Read.read() {
  post readDoneTask();
  return SUCCESS;
} Listing 4.17: Many–to–one binding of a split–phase call

The first represents a calling semantics where there is a one-to-one mapping between successful calls

to read and the readDone event. The second represents a many-to-one calling semantics, where a single

readDone event can correspond to many read requests. The question is whether the user of the interface

is responsible for queueing based on whether read returns SUCCESS, or whether it keeps internal

queueing state. In the former approach, the user can’t distinguish between “I’ll be free when you get an

event” and “I’m not free now, try later” unless it keeps state on whether it has a request pending. In the


latter approach, if the user wants to queue it also needs to keep state on whether it has a request pending, as

issuing another read can confuse it (number of command calls != number of event signals). While

I personally prefer the latter approach, it’s a matter of taste. There are plenty of developers who prefer the

former approach. The important part is that an interface precisely states which semantics it follows.

4.5 Concurrency

Tasks allow software components to emulate the split-phase behavior of hardware. But they have much

greater utility than that. They also provide a mechanism to manage preemption in the system. Because

tasks run atomically with respect to one another, code that runs only in tasks can be rather simple: there’s

no danger of another execution suddenly taking over and modifying data under you. However, interrupts do

exactly that: they interrupt the current execution and start running preemptively.

In nesC and TinyOS, functions that can run preemptively, from outside task context, are labeled with the

async keyword: they run asynchronously with regards to tasks. A rule of nesC is that commands an async

function calls and events an async function signals must be async as well. That is, it can’t call a command

or event that isn’t async. A function that isn’t asynchronous is synchronous (often called “sync” for short).

By default, commands and events are sync: the async keyword specifies if they aren’t. Interface definitions

specify whether their commands and events are async or sync. For example, the Send interface is purely synchronous:


interface Send {
  command error_t send(message_t* msg, uint8_t len);
  event void sendDone(message_t* msg, error_t error);
  command error_t cancel(message_t* msg);
  command void* getPayload(message_t* msg);
  command uint8_t maxPayloadLength(message_t* msg);
} Listing 4.18: The Send interface

In contrast, the Leds interface is purely asynchronous:

interface Leds {
  async command void led0On();
  async command void led0Off();
  async command void led0Toggle();
  async command void led1On();
  async command void led1Off();
  async command void led1Toggle();
  async command void led2On();
  async command void led2Off();
  async command void led2Toggle();
  async command uint8_t get();
  async command void set(uint8_t val);
} Listing 4.19: The Leds interface

All interrupt handlers are async, and so they cannot include any sync functions in their call graph. The

one and only way that an interrupt handler can execute a sync function is to post a task. A task post is an

async operation, while a task running is sync.

For example, consider a packet layer on top of a UART. When the UART receives a byte, it signals an

interrupt. In the interrupt handler, software reads the byte out of the data register and puts it in a buffer.

When the last byte of a packet is received, the software needs to signal packet reception. But the receive

event of the Receive interface is sync. So in the interrupt handler of the final byte, the component posts a

task to signal packet reception.

This raises the question: If tasks introduce latency, why use them at all? Why not make everything

async? The reason is simple: race conditions. The basic problem with preemptive execution is that it can

modify state underneath an ongoing computation, which can cause a system to enter an inconsistent state.

For example, consider this command, toggle, which flips the state bit and returns the new value:

bool state;

async command bool toggle() {
  if (state == 0) {
    state = 1;
    return 1;
  }
  if (state == 1) {
    state = 0;
    return 0;
  }
} Listing 4.20: Toggling a state variable

Now imagine this execution, which starts with state = 0:


call toggle()
  state = 1;
  -> interrupt
    call toggle()
      state = 0;
      return 0;
  return 1; Listing 4.21: A call sequence that could corrupt a variable

In this execution, when the first toggle returns, the calling component will think that state is equal to 1.

But the last assignment (in the interrupt) was to 0.

This problem can be much worse when a single statement can be interrupted. For example, on micaZ

or Telos motes, writing or reading a 32 bit number takes more than one instruction. It’s possible that an

interrupt executes in between two instructions, so that part of the number read is of an old value while

another part is of a new value.
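This torn read can be simulated deterministically. The sketch below (plain C, invented names) models a 16-bit MCU reading a 32-bit value in two steps, with an "interrupt" firing in between:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative simulation of a torn read: on a 16-bit MCU, reading a
 * 32-bit value takes two instructions, and an interrupt can fire
 * between them. All names here are invented for this example. */
static uint32_t shared = 0x0000FFFFu;

static void interrupt_handler(void) {
    shared = 0x00010000u;        /* e.g., an increment done in an ISR */
}

static uint32_t torn_read(void) {
    uint16_t lo = (uint16_t)(shared & 0xFFFFu); /* first instruction */
    interrupt_handler();                        /* preemption here */
    uint16_t hi = (uint16_t)(shared >> 16);     /* second instruction */
    return ((uint32_t)hi << 16) | lo;           /* half old, half new */
}
```

The result, 0x0001FFFF, is neither the old value (0x0000FFFF) nor the new one (0x00010000): exactly the inconsistency an atomic section prevents.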

This problem — data races — is particularly pronounced with state variables. For example, imagine this

is a snippet of code from AMStandard, the basic packet abstraction in TinyOS 1.x, with a bunch of details

omitted. The state variable indicates whether the component is busy.

command result_t SendMsg.send ... {
  if (!state) {
    state = TRUE;
    // send a packet
    return SUCCESS;
  }
  return FAIL;
} Listing 4.22: State transition that is not async–safe

If this command were async, then it’s possible between the conditional “if (!state)” and the assignment

“state = TRUE” that another component jumps in and tries to send as well. This second call will see state

to be false, set state to true, start a send and return SUCCESS. But then the first caller will resume, set state

to true again, start a send, and return SUCCESS. Only one of the two packets will be sent successfully, but

barring additional error checks in the call path, it can be hard to find out which one, and this might introduce

all kinds of bugs in the calling components. Note that the command isn’t async.

Programming Hint 3: Keep code synchronous when you can. Code should be async only if its timing is

very important or if it might be used by something whose timing is important.

The problems interrupts introduce mean that programs need a way to execute snippets of code that

won’t be preempted. nesC provides this functionality through atomic statements. For example:


command bool increment() {
  atomic {
    a++;
    b = a + 1;
  }
} Listing 4.23: Incrementing with an atomic section

The atomic block promises that these variables can be read and written atomically. Note that this does

not promise that the atomic block won’t be preempted. Even with atomic blocks, two code segments that do

not touch any of the same variables can preempt one another. For example:

async command bool incrementA() {
  atomic {
    a++;
    b = a + 1;
  }
}

async command bool incrementC() {
  atomic {
    c++;
    d = c + 1;
  }
} Listing 4.24: Incrementing with two independent atomic sections

In this example, incrementC can (theoretically) preempt incrementA without violating atomicity. But incrementA

can’t preempt itself, nor can incrementC preempt itself.

nesC goes further than providing atomic sections: it also checks whether variables are protected

properly and issues warnings when they are not. For example, if b and c from the prior example didn’t

have atomic sections, then nesC would issue a warning because of possible self-preemption. The rule for

when a variable has to be protected by an atomic section is simple: if it is accessed from an async function,

then it must be protected. nesC’s analysis is flow sensitive. This means that if you have a function that does

not include an atomic block, but is always called from within an atomic block, the compiler won’t issue a

warning. Otherwise, you might have lots of unnecessarily nested atomic blocks. Usually, an atomic block

involves some kind of execution (e.g., disabling interrupts), so unnecessary atomics are a waste of CPU

cycles. Furthermore, nesC removes redundant atomic blocks.

While you can make data race warnings go away by liberally sprinkling your code with atomic blocks,

you should do so carefully. On one hand, an atomic block does have a CPU cost, so you want to minimize

how many you have. On the other, shorter atomic blocks delay interrupts less and so improve system


concurrency. The question of how long an atomic block runs is a tricky one, especially when your component

has to call another component.

For example, the SPI bus implementation on the Atmega128 has a resource arbiter to manage access to

the bus. The arbiter allows different clients to request the resource (the bus) and informs them when they’ve

been granted it. However the SPI implementation doesn’t want to specify the arbiter policy (e.g., first come

first served vs. priority), so it has to be wired to an arbiter. This decomposition has implications for power

management. The SPI turns itself off when it has no users, but it can’t know when that is without calling the

arbiter (or replicating arbiter state). This means that the SPI has to atomically see if it’s being used, and if

not, turn itself off:

atomic {
  if (!call ArbiterInfo.inUse()) {
    // turn the SPI off
  }
} Listing 4.25: Atomically turning off the SPI bus

In this case, the call to inUse() is expected to be very short (in practice, it’s probably reading a state

variable). If someone wired an arbiter whose inUse() command took 1ms, then this could be a problem. The

implementation assumes this isn’t the case. Sometimes (like this case), you have to make these assumptions,

but it’s good to make as few as possible.

The most basic use of atomic blocks is for state transitions within a component. Usually, a state transition

has two parts, both of which are determined by the existing state and the call: the first is changing to a new

state, the second is taking some kind of action. Returning to the AMStandard example, it looks something

like this:

if (!state) {
  state = TRUE;
  // send a packet
  return SUCCESS;
}
else {
  return FAIL;
} Listing 4.26: State transition that requires a large atomic section

If state is touched by an async function, then you need to make the state transition atomic. But you don’t

want to put the entire block within an atomic section, as sending a packet could take a long enough time that


it causes the system to miss an interrupt. So the code does something like this:

uint8_t oldState;
atomic {
  oldState = state;
  state = TRUE;
}
if (!oldState) {
  // send a packet
  return SUCCESS;
}
else {
  return FAIL;
} Listing 4.27: A fast and atomic state transition

If state were already true, it doesn’t hurt to just set it true. This takes fewer CPU cycles than the

somewhat redundant statement of

if (state != TRUE) {state = TRUE;}

Listing 4.28: An unnecessary conditional

In this example, the state transition occurs in the atomic block, but then the actual processing occurs

outside it, based on the state the component started in.
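The same grab-the-old-state pattern can be sketched with C11 atomics (this is only an analogy: nesC's atomic block is not C11 atomics, and the names here are invented). atomic_exchange reads the old state and writes the new one as one indivisible step, so the test and the transition cannot be separated:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative C11 version of the short-atomic-transition pattern.
 * SUCCESS/FAIL stand in for TinyOS's error_t values. */
enum { SUCCESS = 0, FAIL = 1 };
static atomic_bool busy = false;

static int try_send(void) {
    bool oldState = atomic_exchange(&busy, true); /* short "atomic" part */
    if (!oldState) {
        /* send a packet: the long-running work stays outside */
        return SUCCESS;
    }
    return FAIL;
}

static void send_done(void) {
    atomic_store(&busy, false);
}
```

As in the listing above, only the state transition is protected; the expensive work runs outside it, based on the state the component started in.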

Let’s look at a real example. This component is CC2420ControlP, which is part of the T2 CC2420 radio

stack. CC2420ControlP is responsible for configuring the radio’s various IO options, as well as turning it

on and off. Turning on the CC2420 radio has four steps:

1. Turn on the voltage regulator (0.6ms)

2. Acquire the SPI bus to the radio (depends on contention)

3. Start the radio’s oscillator by sending a command over the bus (0.86ms)

4. Put the radio in RX mode (0.2ms)

Some of the steps that take time are split-phase and have async completion events (particularly, 1 and

3). The actual call to start this series of events, however, is SplitControl.start(), which is sync. One way to

implement this series of steps is to assign each step a state and use a state variable to keep track of where

you are. However, this turns out to not be necessary. Once the start sequence begins, it continues until it

completes. So the only state variable you need is whether you’re starting or not. After that point, every


completion event is implicitly part of a state. E.g., the startOscillatorDone() event implicitly means that the

radio is in state 3. Because SplitControl.start() is sync, the state variable can be modified without any atomic blocks:


command error_t SplitControl.start() {
  if ( m_state != S_STOPPED )
    return FAIL;
  m_state = S_STARTING;
  m_dsn = call Random.rand16();
  call CC2420Config.startVReg();
  return SUCCESS;
} Listing 4.29: The first step of starting the CC2420 radio

The startVReg() starts the voltage regulator. This is an async command. In its completion event, the

radio tries to acquire the SPI bus:

async event void CC2420Config.startVRegDone() {
  call Resource.request();
} Listing 4.30: The handler that the first step of starting the CC2420 is complete

In the completion event (when it receives the bus), it sends a command to start the oscillator:

event void Resource.granted() {
  call CC2420Config.startOscillator();
} Listing 4.31: The handler that the second step of starting the CC2420 is complete

Finally, when the oscillator completion event is signaled, the component tells the radio to enter RX mode

and posts a task to signal the startDone() event. It has to post a task because oscillatorDone is async, while

startDone is sync. Note that the component also releases the bus for other users.

async event void CC2420Config.startOscillatorDone() {
  call SubControl.start();
  call CC2420Config.rxOn();
  call Resource.release();
  post startDone_task();
} Listing 4.32: Handler that the third step of starting the CC2420 radio is complete


Finally, the task changes the radio’s state from STARTING to STARTED:

task void startDone_task() {
  m_state = S_STARTED;
  signal SplitControl.startDone( SUCCESS );
} Listing 4.33: State transition so components can send and receive packets

An alternative implementation could have been to put the following code in the startOscillatorDone() event:


atomic {
  m_state = S_STARTED;
} Listing 4.34: An alternative state transition implementation

The only possible benefit in doing so is that the radio could theoretically accept requests earlier. But

since components shouldn’t be calling the radio until the startDone event is signaled, this would be a bit

problematic. There’s no chance of another task sneaking in between the change in state and signaling the

event when both are done in the startDone task.
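The whole start sequence can be simulated as a chain of completion events. The sketch below (plain C, names only loosely modeled on CC2420ControlP; the hardware delays are simulated by invoking each handler directly) shows why a single starting/started variable suffices:

```c
#include <assert.h>

/* Illustrative simulation of the CC2420 start sequence: each
 * split-phase completion event kicks off the next step, and only a
 * starting/started state variable is kept. Not the real driver code. */
enum { S_STOPPED, S_STARTING, S_STARTED };
enum { SUCCESS = 0, FAIL = 1 };
static int m_state = S_STOPPED;

static void start_oscillator_done(void) {  /* step 4 + startDone task */
    m_state = S_STARTED;
}
static void bus_granted(void) {            /* step 3: start oscillator */
    start_oscillator_done();
}
static void start_vreg_done(void) {        /* step 2: request the bus */
    bus_granted();
}
static int split_control_start(void) {
    if (m_state != S_STOPPED)
        return FAIL;
    m_state = S_STARTING;                  /* sync, so no atomic needed */
    start_vreg_done();                     /* step 1's completion, simulated */
    return SUCCESS;
}
```

Each completion handler implicitly encodes which step the radio is in, which is exactly why no per-step state variable is needed.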

Programming Hint 4: Keep atomic sections short, and have as few of them as possible. Be careful about

calling out to other components from within an atomic section.

4.6 Allocation

Besides power, the most valuable resource to mote systems is RAM. Power means that the radio and CPU

have to be off almost all the time. Of course, there are situations which need a lot of CPU or a lot of

bandwidth (e.g., cryptography or binary dissemination), but by necessity they have to be rare occurrences.

In contrast, the entire point of RAM is that it’s always there. The sleep current of the microcontrollers most

motes use today is, for the most part, determined by RAM.

Modules can allocate variables. Following the naming scope rules of nesC, these variables are com-

pletely private to a component. For example, the PeriodicReaderC component allocated a lastVal and a

filterVal, both of which were two bytes, for a total cost of 4 bytes of RAM. Because tasks run to completion,

TinyOS does not have an equivalent abstraction to a thread or process. More specifically, there is no execu-

tion entity that maintains execution state beyond what is stored in components. When a TinyOS system is


quiescent, component variables represent the entire software state of the system.

The only way that components can share state is through function calls, which are (hopefully) part of

interfaces. Just as in C, there are two basic ways that components can pass parameters: by value and by reference

(pointer). In the first case, the data is copied onto the stack, and so the callee can modify it or cache it freely.

In the second case, the caller and callee share a pointer to the data, and so components need to carefully

manage access to the data in order to prevent memory corruption and memory leaks. While it’s fine to pass

pointers as arguments, you have to be very careful about storing pointers in a component. The general idea is

that, at any time, every pointer should have a clear owner, and only the owner can modify the corresponding data.


For example, abstract data types (ADTs) in TinyOS are usually represented in one of two ways: generic

modules or through an interface with by-reference commands. With a generic module, the module allocates

the ADT state and provides accessors to its internal state. For example, many TinyOS components need to

maintain bit vectors, and so in tos/system there’s a generic module BitVectorC that takes the number of bits

as a parameter:

generic module BitVectorC( uint16_t max_bits ) {
  provides interface Init;
  provides interface BitVector;
} Listing 4.35: Signature of BitVectorC

This component allocates the bit vector internally and provides the BitVector interface to access it:

interface BitVector {
  async command void clearAll();
  async command void setAll();
  async command bool get(uint16_t bitnum );
  async command void set(uint16_t bitnum );
  async command void clear(uint16_t bitnum );
  async command void toggle(uint16_t bitnum );
  async command void assign(uint16_t bitnum, bool value );
  async command uint16_t size();
} Listing 4.36: The BitVector interface

With this kind of encapsulation, assuring that accesses to the data type are race-free is reasonably easy,

as the internal implementation can use atomic sections appropriately. There is always the possibility that

preempting modifications will lead to temporal inconsistencies with accessors. E.g., in BitVector, it’s pos-

sible that, after a bit has been fetched for get() but before it returns, an interrupt fires whose handler calls


set() on that same bit. In this case, get() returns after set(), but its return value is the value before set(). If

this kind of interleaving is a problem for your code, then you should call get() from within an atomic section.

Generic modules are discussed in depth in Chapter 7.
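The encapsulation idea behind BitVectorC can be sketched in plain C: the storage is private to one "component" and callers go through accessors. MAX_BITS stands in for the generic's max_bits parameter; the function names are invented for this example:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Illustrative C version of the BitVectorC idea: private storage,
 * accessed only through accessor functions. Not TinyOS code. */
enum { MAX_BITS = 40 };
static uint8_t vector[(MAX_BITS + 7) / 8];  /* one bit per entry */

static void bv_clear_all(void)   { memset(vector, 0, sizeof vector); }
static void bv_set(uint16_t n)   { vector[n / 8] |= (uint8_t)(1u << (n % 8)); }
static void bv_clear(uint16_t n) { vector[n / 8] &= (uint8_t)~(1u << (n % 8)); }
static bool bv_get(uint16_t n)   { return (vector[n / 8] >> (n % 8)) & 1u; }
static uint16_t bv_size(void)    { return MAX_BITS; }
```

Because no caller can touch the array directly, the implementation is free to add atomic sections inside the accessors without changing its users.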

TinyOS 1.x uses only the second approach, passing a parameter by reference, because it does not have

generic modules. For example, the Maté virtual machine supports scripting languages with typed variables,

and provides functionality for checking and setting types. In this case, the ADT is a script variable. In the

interface MateTypes below, a MateContext* is a thread and a MateStackVariable* is a variable:

interface MateTypes {
  command bool checkTypes(MateContext* context, MateStackVariable* var, uint8_t type);
  command bool checkMatch(MateContext* context, MateStackVariable* v1, MateStackVariable* v2);
  command bool checkValue(MateContext* context, MateStackVariable* var);
  command bool checkInteger(MateContext* context, MateStackVariable* var);
  command bool isInteger(MateContext* context, MateStackVariable* var);
  command bool isValue(MateContext* context, MateStackVariable* var);
  command bool isType(MateContext* context, MateStackVariable* var, uint8_t type);
} Listing 4.37: Representing an ADT though an interface in TinyOS 1.x

When a component implements an ADT in this way, callers have to be careful to not corrupt the data

type. Between when a call is made to the ADT’s component and when that call returns, a component should

not modify the variable (i.e., call the ADT component again). In the MateTypes example above, this is easy,

as all of its commands are synchronous: no code that can preempt the call (async) can itself call MateTypes.

ADTs represent the simple case of when pointers are used: they are inevitably single-phase calls. You

don’t, for example, expect a MateTypes.isType() command to have a MateTypes.isTypeDone() event. The

much trickier situation for pointers is when they involve a split-phase call. Because the called component

probably needs access to the pointer while the operation is executing, it has to store it in a local variable.

For example, consider the basic Send interface:

interface Send {
  command error_t send(message_t* msg, uint8_t len);
  event void sendDone(message_t* msg, error_t error);
  command error_t cancel(message_t* msg);
  command void* getPayload(message_t* msg);
  command uint8_t maxPayloadLength(message_t* msg);
} Listing 4.38: The Send interface

The important pair of functions in this example is send/sendDone. To send a packet, a component calls


send. If send returns SUCCESS, then the caller has passed the packet to a communication stack to use, and

must not modify the packet. The callee stores the pointer in a variable, enacts a state change, and returns

immediately. If the interface user modifies the packet after passing it to the interface provider, the packet

could be corrupted. For example, the radio stack might compute a checksum over the entire packet, then

start sending it out. If the caller modifies the packet after the checksum has been calculated, then the data

and checksum won’t match up and a receiver will reject the packet. When a split-phase interface has this

kind of “pass” semantics, the completion event should have the passed pointer as one of its parameters.

Programming Hint 5: Only one component should be able to modify a pointer’s data at any time. In the

best case, only one component should be storing the pointer at any time.

One of the trickiest examples of this pass approach is the Receive interface. At first glance, the interface

seems very simple:

interface Receive {
  event message_t* receive(message_t* msg, void* payload, uint8_t len);
  command uint8_t getPayloadLength(message_t* msg);
  command void* getPayload(message_t* msg, uint8_t* len);
} Listing 4.39: The Receive interface

The receive event is rather different than most events: it has a message_t* as both a parameter and a

return value. When the communication layer receives a packet, it passes that packet to the higher layer

as a parameter. However, it also expects the higher layer to return it a message_t* back. The basic idea

behind this is simple: if the communication layer doesn’t have a message_t*, it can’t receive packets, as it

has nowhere to put them. Therefore, the higher layer always has to return a message_t*, which is the next

buffer the radio stack will use to receive into. This return value can be the same as the parameter, but it does

not have to be. For example, this is perfectly reasonable, if a bit feature-free, code:

event message_t* Receive.receive(message_t* msg, void* payload, uint8_t len) {
  return msg;
} Listing 4.40: The simplest receive handler

A receive handler can always copy needed data out of the packet and just return the passed buffer.

There are, however, situations when this is undesirable. One common example is a routing queue. If the

node has to forward the packet it just received, then copying it into another buffer is wasteful. Instead, a


queue allocates a bunch of packets, and in addition to a send queue, keeps a free list. When the routing layer

receives a packet to forward, it sees if there are any packets left in the free list. If so, it puts the received

packet into the send queue and returns a packet from the free list, giving the radio stack a buffer to receive

the next packet into. If there are no packets left in the free list, then the queue can’t accept the packet and so

just returns it back to the radio for re-use. The pseudocode looks something like this:

receive (m):
  if I’m not the next hop, return m // Not for me
  if my free list is empty, return m // No space
  else
    put m on forwarding queue
    return entry from free list
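The pseudocode above can be sketched in C. This is an illustrative model only: message_t is a stand-in struct and the names are invented. The receive handler either swaps the incoming buffer for a free one or hands the same buffer straight back:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative sketch of the buffer-swap pattern: a forwarding queue
 * with a private pool of packet buffers. Not TinyOS code. */
typedef struct { unsigned char payload[32]; } message_t;

enum { POOL_SIZE = 3 };
static message_t pool[POOL_SIZE];
static message_t *free_list[POOL_SIZE] = { &pool[0], &pool[1], &pool[2] };
static int free_count = POOL_SIZE;
static message_t *send_queue[POOL_SIZE];
static int queue_len = 0;

static message_t *receive(message_t *m, int am_next_hop) {
    if (!am_next_hop)
        return m;                     /* not for me */
    if (free_count == 0)
        return m;                     /* no space: drop, reuse buffer */
    send_queue[queue_len++] = m;      /* keep m on the forwarding queue */
    return free_list[--free_count];   /* hand the radio a fresh buffer */
}
```

The key invariant is that the caller always gets exactly one buffer back, so the radio is never left without somewhere to put the next packet.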

One of the most common mistakes early TinyOS programmers encounter is misusing the Receive inter-

face. For example, imagine a protocol that does this:

event message_t* LowerReceive.receive(message_t* m, void* payload, uint8_t len) {
  if (amDestination(m)) {
    signal UpperReceive.receive(m, payload, len);
  }
  return m;
} Listing 4.41: A broken receive handler that doesn’t respect buffer swapping

The problem with this code is that it ignores the return value from the signal to UpperReceive.receive.

If the component that handles this event performs a buffer swap — e.g., it has a forwarding queue — then

the packet it returns is lost. Furthermore, the packet that it has put on the queue has also been returned to the

radio for the next packet reception. This means that, when the packet reaches the end of the queue, the node

may send something completely different than what it decided to forward (e.g., a packet for a completely

different protocol).

The buffer swap approach of the Receive interface provides isolation between different communication

components. Imagine, for example, a more traditional approach, where the radio dynamically allocates a

packet buffer when it needs one. It allocates buffers and passes them to components on packet reception.

What happens if a component holds on to its buffers for a very long time? Ultimately, the radio stack will

run out of memory to allocate from, and will cease being able to receive packets at all. By pushing the


allocation policy up into the communication components, protocols that have no free memory left are forced

to drop packets, while other protocols continue unaffected.

This approach speaks more generally of how TinyOS components generally handle memory allocation.

All state is allocated in one of two places: components, or the stack. A shared dynamic memory pool

across components makes it much easier for one bad component to cause others to fail. That is not to say

that dynamic allocation is never used. For example, the TinyDB system and some versions of the Maté virtual

machine both maintain a dynamic memory pool. Both of them have a component that allocates a block of

memory and provides an interface to allocate and free chunks within that block. However, both allocators

are shared by a relatively small set of components that are designed to work together: this is a much more

limited, and safer, approach than having routing layers, signal processing modules, and applications all share

a memory pool.

Programming Hint 6: Allocate all state in components. If your application requirements necessitate a

dynamic memory pool, encapsulate it in a component and try to limit the set of users.

Modules often need constants of one kind or another, such as a retransmit count or a threshold. Using a

literal constant is problematic, as you’d like to be able to reuse a consistent value. This means that in C-like

languages, you generally use something like this:

const int MAX_RETRANSMIT = 5;
if (txCount < MAX_RETRANSMIT) {
  ...
} Listing 4.42: Wasting memory by defining a constant as an integer

The problem with doing this in nesC/TinyOS is that a const int might allocate RAM, depending on the

compiler (good compilers will place it in program memory). You can get the exact same effect by defining

an enum:

enum {
  MAX_RETRANSMIT = 5
}; Listing 4.43: Defining a constant as an enum

This allows the component to use a name to maintain a consistent value and does not store the value

either in RAM or program memory. This can even improve performance, as rather than a memory load, the


architecture can just load a constant. It’s also better than a #define, as it exists in the debugging symbol table

and application metadata.

Note, however, that using enum types in variable declarations can waste memory, as enums default to

integer width. For example, imagine this enum:

typedef enum {
  STATE_OFF = 0,
  STATE_STARTING = 1,
  STATE_ON = 2,
  STATE_STOPPING = 3
} state_t; Listing 4.44: An example enum

Here are two different ways you might allocate the state variable in question:

state_t state; // platform int size (e.g., 2-4 bytes)

uint8_t state; // one byte Listing 4.45: Allocating a state variable

Even though the valid range of values is 0-3, the former will allocate a native integer, which on a

microcontroller is usually 2 bytes, but could be 4 bytes on low power microprocessors. The second will

allocate a single byte. So you should use enums to declare constants, but avoid declaring variables of an

enum type.

Programming Hint 7: Conserve memory by using enums rather than const variables for integer constants,
and don’t declare variables with an enum type.
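Both halves of this hint can be checked in plain C (a sketch; the STATE_* names are illustrative, not from the original listing):

```c
#include <stdint.h>

/* Integer constants as anonymous enum members: no storage is allocated,
   yet the name survives into the debugging symbol table. */
enum { MAX_RETRANSMIT = 5 };

/* An enum type used only for its named values... */
typedef enum { STATE_OFF, STATE_STARTING, STATE_ON, STATE_STOPPING } state_t;

/* ...while the variable itself is declared with an explicit one-byte type,
   rather than state_t, which would default to integer width. */
uint8_t state = STATE_OFF;
```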


Chapter 5

Configurations and Wiring

The previous two chapters dealt with modules, which are the basic building blocks of a TinyOS program.

Modules allocate state and implement executable logic. However, like all components, they can only name

functions and variables within their local namespaces, as defined by their signatures. For one module to be

able to call another, we have to map a set of names in one component — generally, an interface — to a set of

names in another component. In nesC, connecting two components in this way is called wiring. In addition

to modules, nesC has a second kind of component, configurations, whose implementation is component

wirings. Modules implement program logic: configurations compose modules into larger abstractions.

In a TinyOS program, there are usually more configurations than modules. There are two reasons for

this. First, except for low-level hardware abstractions, any given component is built on top of a set of other
abstractions, which are encapsulated in configurations. For example, a routing stack depends on a single-hop

packet layer, which is a configuration. This single-hop configuration wires the actual protocol implementa-

tion module (e.g., setting header fields) to a raw packet layer on top of the radio. This raw packet layer is a

configuration that wires the module which sends bytes out to the bus over which it sends bytes. The bus, in

turn, is a configuration. These layers of encapsulation generally reach very low in the system.

Essentially, encapsulating an abstraction A in a configuration means that it can be ready-to-use: all we

need to do is wire to A’s functionality. In contrast, if it were a module that uses interfaces, then we’d need

to wire up A’s dependencies and requirements as well. That’s sort of like having to link the Java libraries

against the C libraries every time you want to compile a Java program. For example, a radio stack can use a

really wide range of resources, including buses, timers, random number generators, cryptographic support,

and hardware pins. Rather than expecting a programmer to connect the stack up to all of these things, the

entire stack can be encapsulated in a single component. This component connects all of the subcomponents



to the abstractions they need.

In addition to wiring one component to another, configurations also need to export interfaces. This is

another kind of wiring, except that, rather than connect two end points — a provider and a user — an export

(also called pass-through wiring) maps one name to another. This idea is confusing at first, and is best

explained after a few examples, so we’ll return to it later.

Configurations look very similar to modules. They have a specification and an implementation. This is


the configuration LedsC, which presents the TinyOS abstraction of the ubiquitous three LEDs:

configuration LedsC {
  provides interface Init @atleastonce();
  provides interface Leds;
}
implementation {
  components LedsP, PlatformLedsC;
  Init = LedsP;
  Leds = LedsP;
  LedsP.Led0 -> PlatformLedsC.Led0;
  LedsP.Led1 -> PlatformLedsC.Led1;
  LedsP.Led2 -> PlatformLedsC.Led2;
} Listing 5.1: The LedsC configuration

A configuration must name the components it wires with the components keyword. Any number of
component names can follow a components keyword, and their order does not matter. A configuration
can have multiple components statements. A configuration must name a component before it wires it.


Syntactically, configurations are very simple. They have three operators: ->, <-, and =. The first two
are for basic wiring: the arrow points from the user to the provider. For example, the following two lines are
equivalent:

MyComponent.Random -> RandomC.Random;

RandomC.Random <- MyComponent.Random;

Listing 5.2: Wiring directionality

A direct wiring (a -> or <-) always goes from a user to a provider, and resolves the call paths in both
directions. That is, once an interface is linked with a -> or <- operator, it is considered connected. Here’s a
simple example, the Blink application:

1 Don’t worry about the @atleastonce attribute; attributes are discussed in Chapter 9.

module BlinkC {

uses interface Timer<TMilli> as Timer0;

uses interface Timer<TMilli> as Timer1;

uses interface Timer<TMilli> as Timer2;

uses interface Leds;

uses interface Boot;

} Listing 5.3: The signature of the BlinkC module

When BlinkC calls Leds.led0Toggle(), it names a function in its own local scope (BlinkC.Leds.led0Toggle).


The LedsC component provides the Leds interface:

configuration LedsC {

provides interface Init @atleastonce();

provides interface Leds;

} Listing 5.4: The signature of the LedsC configuration

BlinkC calls the function BlinkC.Leds.led0Toggle. LedsC provides the function LedsC.Leds.led0Toggle().

Wiring the two maps the first to the second:

configuration BlinkAppC {}

implementation {

components MainC, BlinkC, LedsC;

// Some code elided

BlinkC.Leds -> LedsC;

} Listing 5.5: The BlinkAppC configuration that wires BlinkC to LedsC

This means that when BlinkC calls BlinkC.Leds.led0Toggle, it actually calls LedsC.Leds.led0Toggle.

The same is true for other calls of the Leds interface, such as Leds.led1On. The configuration BlinkAppC

provides a mapping between the local namespaces of the two components. The -> operator maps between
two components that a configuration names, and is always from a user to a provider.
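For reference, the wirings elided from Listing 5.5 look roughly like this (a sketch; the `new TimerMilliC()` syntax instantiates a generic component, covered in a later chapter):

```nesc
configuration BlinkAppC {}
implementation {
  components MainC, BlinkC, LedsC;
  components new TimerMilliC() as Timer0;
  components new TimerMilliC() as Timer1;
  components new TimerMilliC() as Timer2;

  BlinkC -> MainC.Boot;      // run once the system has booted
  BlinkC.Timer0 -> Timer0;
  BlinkC.Timer1 -> Timer1;
  BlinkC.Timer2 -> Timer2;
  BlinkC.Leds -> LedsC;
}
```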

From the perspective of someone using a component, it shouldn’t be relevant whether it is a module

or a configuration. Just like modules, configurations can provide and use interfaces. But as they have no

code, these interfaces must be defined in terms of other components. Take, for example, ActiveMessageC,

the HIL for packet-level communication. Every hardware platform defines a component ActiveMessageC,

which the basic packet components (AMSenderC, AMReceiverC, etc.) wire to. Generally, ActiveMessageC

2 Don’t worry about the @atleastonce attribute on Init; we discuss it in Chapter 9. It’s basically a way to make sure somebody

initializes the Leds.


is just a configuration that renames a particular radio chip’s active message layer. For example, this is the


ActiveMessageC of the Telos platform:

configuration ActiveMessageC {
  provides {
    interface Init;
    interface SplitControl;
    interface AMSend[uint8_t id];
    interface Receive[uint8_t id];
    interface Receive as Snoop[uint8_t id];
    interface Packet;
    interface AMPacket;
    interface PacketAcknowledgements;
  }
}
implementation {
  components CC2420ActiveMessageC as AM;
  Init = AM;
  SplitControl = AM;
  AMSend = AM;
  Receive = AM.Receive;
  Snoop = AM.Snoop;
  Packet = AM;
  AMPacket = AM;
  PacketAcknowledgements = AM;
} Listing 5.6: The ActiveMessageC Configuration

All ActiveMessageC does is take CC2420ActiveMessageC and present all of its interfaces with a differ-

ent name. Another option could have been for the CC2420 (a radio chip) code to define an ActiveMessageC.

From an OS standpoint, the problem with this approach is dealing with the situation when a platform has

two radio chips: they both define an ActiveMessageC, and since this is a global name, you need some way

to determine which one.

ActiveMessageC uses the other configuration operator, =, which exports interfaces. While the -> operator
maps between the interfaces of components that a configuration names, the = operator maps between

a configuration’s own interfaces and components that it names, exporting interfaces of components within

the configuration out to the configuration’s namespace. Take, for example, RandomC, the component that

defines the standard TinyOS random number generator:

3 Don’t worry about the array brackets on AMSend, Receive, and Snoop: they represent what’s called a parameterized interface,

which is covered in a later chapter.

configuration RandomC {
  provides interface Init;
  provides interface ParameterInit<uint16_t> as SeedInit;
  provides interface Random as Random;
}
implementation {
  components RandomMlcgC;
  components MainC;
  Init = RandomMlcgC; // Allow for re-initialization
  MainC.SoftwareInit -> RandomMlcgC; // Auto-initialize
  SeedInit = RandomMlcgC;
  Random = RandomMlcgC;
} Listing 5.7: The RandomC configuration

Mlcg stands for MLCG (multiplicative linear congruential generator). In the default case, RandomC is a
wrapper around RandomMlcgC. There’s another implementation, RandomLfsrC, that is about twice as fast
but produces much lower-quality random numbers. Platforms or applications that need to use RandomLfsrC

can redefine RandomC to encapsulate RandomLfsrC instead.
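Such a redefinition could look like this (a sketch mirroring Listing 5.7, with RandomMlcgC swapped for RandomLfsrC):

```nesc
configuration RandomC {
  provides interface Init;
  provides interface ParameterInit<uint16_t> as SeedInit;
  provides interface Random as Random;
}
implementation {
  components RandomLfsrC;   // the LFSR generator instead of the MLCG
  components MainC;
  Init = RandomLfsrC;                 // allow for re-initialization
  MainC.SoftwareInit -> RandomLfsrC;  // auto-initialize on boot
  SeedInit = RandomLfsrC;
  Random = RandomLfsrC;
}
```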

RandomMlcgC provides the Init interface. Calling RandomMlcgC.Init.init seeds the random number

generator with the node’s local address. The SeedInit interface allows a component to start the generator

with a specific seed. RandomC wires RandomMlcgC.Init in two different ways:

Init = RandomMlcgC; // Allow for re-initialization

MainC.SoftwareInit -> RandomMlcgC; // Auto-initialize

Listing 5.8: RandomC’s wiring of Init for auto–initialization and re–initialization

In the first, it equates its own RandomC.Init with RandomMlcg.Init. If a component calls RandomC.Init.init(),

it actually calls RandomMlcgC.Init.init(). In the second, RandomC wires RandomMlcgC.Init to the TinyOS

boot sequence (MainC). When TinyOS boots, it calls MainC.SoftwareInit.init (see TEP 107 for the full boot

sequence), and so it calls RandomMlcgC.Init.init(). This means that, before an application starts, RandomC

has made sure that its underlying random number generator has been properly seeded. If the application re-

seeds it by calling RandomC.Init or RandomC.SeedInit, no harm is done. But by wiring to MainC, RandomC

makes sure that an application (or protocol, or system) doesn’t have to remember to initialize RandomC.

This technique — “auto-wiring” initialization — is used in many T2 abstractions. One very common bug

in TinyOS 1.x is to forget to initialize. With auto-wiring, multiple components might end up initializing
the same component. This approach is wasteful, but since initialization only happens once, it’s not a huge


issue. The bigger issue is that a component often relies on someone else initializing. For example, imagine

two radio stacks, A and B. A initializes the timer system, B does not. A programmer writes an application

using radio stack A and forgets to initialize the timer system. Because radio stack A does, everything works

fine. The programmer then decides to switch to radio stack B, and nothing works: neither the application

nor the stack initialize the timers, and so the system just hangs. For software initialization – setting fields,
etc. – ordering generally doesn’t matter (Init is not supposed to call anything besides Init). Hardware initialization
is a much trickier problem, and is generally handled on a per-platform basis. Refer to TEP 107 for more details.

Programming Hint 8: In the top-level configuration of a software abstraction, auto-wire Init to MainC.

This removes the burden of wiring Init from the programmer, which removes unnecessary work from the

boot sequence and removes the possibility of bugs from forgetting to wire.

From an implementation standpoint, the two configuration operators have two very different purposes.
The = operator defines how a configuration’s interfaces are implemented. Like a module, a configuration
is an abstraction defined by a signature. A module directly implements the functions it needs to (events
from its used interfaces, commands from its provided interfaces). A configuration, in contrast, delegates
the implementation to another component using the = operator. For example, RandomC delegates the
implementation of the Random interface to RandomMlcgC. In contrast, the -> operator combines existing
components, completing existing signatures.

TinyOS component names all end in either C or P. C stands for Component and means that it represents

a usable abstraction. P stands for Private, and generally means that you shouldn’t wire to it: instead, there’s

usually a C that encapsulates it in some way to make it useful. Once you have written the signature for a

C component, changing it is very hard: any number of other components might depend on it, and chang-

ing it will cause compilation errors. In contrast, because a P component is only wired to by higher-level

configurations within that software abstraction, their signatures are much more flexible. E.g., changing the

signature of AMSenderC would break almost all TinyOS code, but an internal change to CC2420ReceiveP

(and changing its wiring in CC2420ReceiveC) should not be apparent to the user.

The distinction between C (an externally usable abstraction) and P (an internal implementation) is par-

ticularly important in nesC because of the component model. In languages such as C, an implementation

can directly reference what it depends on (e.g., library calls). In nesC, a configuration needs to resolve those

dependencies.

Programming Hint 9: If a component is a usable abstraction by itself, its name should end with C. If it is

intended to be an internal and private part of a larger abstraction, its name should end with P. Never wire to

P components from outside your package (directory).

Let’s look at a complete (but very simple) example of how all of these issues can be resolved: RandomC.
As mentioned above, RandomC is the name of the standard TinyOS random number generator. It is a

configuration with this signature:

configuration RandomC {

provides interface Init;

provides interface ParameterInit<uint16_t> as SeedInit;

provides interface Random as Random;

} Listing 5.9: The RandomC signature

The default implementation of RandomC lives in tos/system. As shown above, it maps RandomC to a
specific implementation, RandomMlcgC, while auto-wiring it to the boot sequence. RandomMlcgC is itself a

(trivial) configuration:

configuration RandomMlcgC {
  provides interface Init;
  provides interface ParameterInit<uint16_t> as SeedInit;
  provides interface Random as Random;
}
implementation {
  components RandomMlcgP;
  Init = RandomMlcgP;
  SeedInit = RandomMlcgP;
  Random = RandomMlcgP;
} Listing 5.10: The RandomMlcgC signature

RandomMlcgC represents a complete random number generator abstraction that is a multiplicative linear

congruential generator (MLCG). RandomMlcgP is a particular implementation of such a generator. In this

case, it’s completely in software. A platform that has a hardware random number generator could have a

different RandomMlcgP. Because this different implementation might have a different signature — e.g., it

might require accessing registers through an HPL — it also requires a different RandomMlcgC that resolves

these dependencies to present a complete abstraction.

In short, the configuration RandomC maps the standard random number generator to a specific algorithm,
RandomMlcgC. The configuration RandomMlcgC encapsulates a specific implementation as a complete
abstraction. RandomMlcgP is an implementation of the multiplicative linear congruential generator. Similarly,

there is also a RandomLfsrC, which is a linear feedback shift register random number generator. RandomLfsrC

is a configuration that just exports the interfaces of RandomLfsrP, the software implementation. This hier-

archy of names means that a system can wire to a specific random number generator if it cares which one

it uses, or wire to the general one that TinyOS provides (RandomC). An application can change what the

default random number generator is by defining its own RandomC, which maps to a different algorithm.

5.0.1 The as keyword and other namespace tricks

Components sometimes name two instances of an interface in their signature:

// A greatly elided signature of ActiveMessageC

configuration ActiveMessageC {

provides interface Receive[am_id_t];

provides interface Receive as Snoop[am_id_t];

} Listing 5.11: A truncated ActiveMessageC configuration

The as keyword allows you to rename an interface in a signature. The Snoop interface above, for

example, is still of type Receive: you can wire any “uses interface Receive” to it. However, its name allows

you to distinguish between Snoop (packets not destined for the local node) and Receive (packets destined

for you). Technically, the statement

uses interface StdControl; Listing 5.12: Using StdControl

is actually

uses interface StdControl as StdControl;

Listing 5.13: The implicit as keyword in uses and provides

That is, the first StdControl is the type, and the second is the name. Because this is so common, nesC

allows you to use the shorthand.

The as keyword can also be used within configurations. Because nesC components are in a global

namespace, sometimes they have very long and descriptive names. For example, the lowest level (byte)

SPI bus abstraction on the Atmega128 is HplAtm128SpiP, which means, “This is the private hardware


presentation layer component of the Atmega128 SPI bus.” Typing that in a configuration is a real pain, and

it’s not very easy to read. So, the slightly higher level abstraction, the configuration Atm128SpiC, names it

like this:

components HplAtm128SpiC as HplSpi;

Listing 5.14: Using the as keyword with components

which makes the wiring a good deal more comprehensible. Similarly, CC2420ReceiveC, the receive

path of the CC2420 radio, is a configuration that wires packet logic to things like interrupts and status pins:

configuration CC2420ReceiveC {...}

implementation {

components CC2420ReceiveP;

components new CC2420SpiC() as Spi;

components HplCC2420PinsC as Pins;

components HplCC2420InterruptsC as InterruptsC;

// rest of the implementation elided

} Listing 5.15: CC2420ReceiveC’s use of the as keyword

Because all interfaces are types, when wiring you can sometimes elide one of the interface names.

You’ve actually seen this a lot in the previous examples, such as RandomC:

MainC.SoftwareInit -> RandomMlcgC; // Auto-initialize

Listing 5.16: Autoinitializing RandomMlcgC

On the left side, MainC.SoftwareInit is an instance of the Init interface. On the right side is RandomMl-

cgC, without an interface name. Because RandomMlcgC only provides one instance of the Init interface,

nesC assumes that this is the one you mean. So technically, this line is

MainC.SoftwareInit -> RandomMlcgC.Init;

Listing 5.17: Expanding implicit interface wiring

If it’s an export wiring, then the component name is implicit on one side, so you always have to name

the interface. For example,

Init = RandomMlcgC; // Allow for re-initialization

Listing 5.18: Enabling re–initialization of RandomMlcgC


means “wire Init of this component to the interface of type Init of RandomMlcgC.” This form of short-

hand works in terms of types, not names. It would work just as well if RandomMlcgC provided Init as

“RandomInit”. However, you can’t do this:

= RandomMlcgC.Init; Listing 5.19: An illegal wiring

If a component has two instances of the same interface, then you have to name which instance you mean.

For example, this is ActiveMessageC for the telos platforms:

configuration ActiveMessageC {
  provides {
    interface Init;
    interface Receive[uint8_t id];
    interface Receive as Snoop[uint8_t id];
  }
}
implementation {
  components CC2420ActiveMessageC as AM;
  Init = AM;
  Receive = AM.Receive;
  Snoop = AM.Snoop;
} Listing 5.20: Using the as keyword to distinguish interface instances

Because CC2420ActiveMessageC provides two instances of the Receive interface, ActiveMessageC has

to name them. Basically, wiring has to be precise and unambiguous, but nesC lets you use shorthand in
the common cases of redundancy.

The as keyword makes code more readable and comprehensible. Because there is a flat component

namespace, some components have long and complex names which can be easily summarized, and using the

as keyword with interfaces can add greater semantic information on the role of that interface. Additionally,

by using the as keyword, you create a level of indirection. E.g., if a configuration uses the as keyword

to rename a component, then changing the component only requires changing that one line. Without the

keyword, you have to change every place it’s named in the configuration.
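For example, a configuration that names its generator through as needs only one changed line to swap implementations (MyComponentP is a hypothetical user, wired as in Listing 5.7):

```nesc
implementation {
  components RandomMlcgC as Rng;   // to switch generators, change only this line
  components MyComponentP;
  MyComponentP.Random -> Rng;
  MyComponentP.SeedInit -> Rng;
}
```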

Programming Hint 10: Use the “as” keyword liberally.


5.1 Pass Through Wiring

Sometimes you don’t want a configuration to specify the endpoint of an interface. Instead, you need a

configuration to act as a renaming mechanism, or as a thin shim which interposes on some (but not all) of

the interfaces of a given abstraction. You don’t want the component using the shim component to know

which are interposed. This practice is very rare in TinyOS 2.0 (there isn’t a single instance of it in the core),
but it was sometimes used in 1.x, and so it’s described here for completeness’ sake.

Pass through wiring is when a configuration wires two interfaces in its signature together. It must wire a

uses to a provides, and it does so with the = operator. For example, this is a configuration that does nothing

except introduce a name change on the interface:

configuration NameChangeC {
  provides interface Send as SpecialSend;
  uses interface Send as SubSend;
}
implementation {
  SpecialSend = SubSend;
} Listing 5.21: Using a pass–through wiring to rename an abstraction

A component that wires to NameChangeC.SpecialSend wires to whatever NameChangeC.SubSend has

been wired to.
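For instance, wiring through NameChangeC might look like this (MySenderC and LowerSenderC are hypothetical names, used only to illustrate the pass-through):

```nesc
configuration ShimExampleC {}
implementation {
  components MySenderC, NameChangeC, LowerSenderC;
  MySenderC.Send -> NameChangeC.SpecialSend;  // the user sees SpecialSend...
  NameChangeC.SubSend -> LowerSenderC.Send;   // ...which passes through to the real provider
}
```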

5.1.1 Multiple Wirings, Fan-in, and Fan-out

Not all wirings are one-to-one. For example, this is part of the component CC2420TransmitC, a configura-

tion that encapsulates the transmit path of the CC2420 radio (there’s also a CC2420ReceiveC):

configuration CC2420TransmitC {
  provides interface Init;
  provides interface AsyncControl;
  provides interface CC2420Transmit;
  provides interface CSMABackoff;
  provides interface RadioTimeStamping;
}
implementation {
  components CC2420TransmitP;
  components AlarmMultiplexC as Alarm;
  Init = Alarm;
  Init = CC2420TransmitP;
  // further wirings elided
}
Listing 5.22: Fan–out on CC2420TransmitC’s Init

This wiring means that CC2420TransmitC.Init maps both to Alarm.Init and CC2420TransmitP.Init.

What does that mean? There certainly isn’t any analogue in C-like languages. In nesC, a multiple-wiring

like this means that when a component calls CC2420TransmitC.Init.init(), it calls both Alarm.Init.init() and

CC2420TransmitP.Init.init(). The order of the two calls is not defined.

This ability to multiply wire might seem strange. In this case, you have a single call point, CC2420TransmitC.Init.init,

which fans-out to two callees. There are also fan-ins, which are really just a fancy name for “multiple people

call the same function.” But the similarity of the names “fan-in” and “fan-out” is important, as nesC inter-

faces are bidirectional. For example, coming from C, wiring two components to RandomC.Random doesn’t

seem strange: two different components might need to generate random numbers. In this case, as Random

only has commands, all of the functions are fan-in, as there are multiple callers for a single callee, just like

a library function.

But as nesC interfaces are bidirectional, if there is fan-in on the commands of an interface, then when that

component signals an event on the interface, there are multiple callees. Take, for example, the power control

interfaces, StdControl and SplitControl. StdControl is single-phase: it only has commands. SplitControl, as

its name suggests, is split-phase: the commands have completion events. In this wiring,

components A, B, C;

A.StdControl -> C;

B.StdControl -> C; Listing 5.23: Fan–in on StdControl

Then either A or B can call StdControl to start or stop C. However, in the following wiring, there are also
completion events:

components A, B, C;

A.SplitControl -> C;

B.SplitControl -> C; Listing 5.24: Fan–out and fan–in on SplitControl

Either A or B can call SplitControl.start. When C issues the SplitControl.startDone() event, though,

both of them are wired to it, so both A.SplitControl.startDone and B.SplitControl.startDone are called. C’s
implementation has no way of determining which component called the start command.

4 There are ways to disambiguate this, through parameterized interfaces, which are covered in the next chapter.


Interfaces are not a one-to-one relationship. Instead, they are an n-to-k relationship, where n is the

number of users and k is the number of providers. Any provider signaling will invoke the event handler on

all n users, and any user calling a command will invoke the command on all k providers.

Anecdote: Historically, multiple wirings come from the idea that TinyOS components can be thought

of as hardware chips. In this model, an interface is a set of pins on the chip. The term wiring comes

from this idea: connecting the pins on one chip to those of another. In hardware, though, you can easily

connect N pins together. For example, a given GPIO pin on a chip might have multiple possible triggers,

or a bus have multiple end devices that are controlled with chip select pins. It turns out that taking this

metaphor literally has several issues. When TinyOS moved to nesC, these problems were done away with.

Specifically, consider this configuration:

configuration A {
  uses interface StdControl;
}
configuration B {
  provides interface StdControl;
  uses interface StdControl as SubControl; // Called in StdControl
}
configuration C {
  provides interface StdControl;
}

A -> B.StdControl;
A -> C.StdControl;
B.SubControl -> C;
Listing 5.25: Why the metaphor of “wires” is only a metaphor

If you take the multiple wiring metaphor literally, then the wiring of B to C joins it with the wiring of

A to B and C. That is, they all form a single “wire.” The problem is that B’s call to C is the same wire as

A’s call to B. B enters an infinite recursion loop, as it calls SubControl, which calls StdControl, which calls

SubControl, and so on and so on. Therefore, nesC does not take the metaphor literally. Instead, the wirings

from one interface to another are considered separately. So the code

A -> B.StdControl;

A -> C.StdControl;

B.SubControl -> C; Listing 5.26: How nesC handles multiple wirings


makes it so that when A calls StdControl.start it calls B and C, and when B calls SubControl.start it calls

C. In practice, multiple wirings allow an implementation to be independent of the number of components it

depends on. Consider, for example, MainC, which presents the abstraction of the boot sequence to software:


configuration MainC {

provides interface Boot;

uses interface Init as SoftwareInit;

} Listing 5.27: MainC

It only has two interfaces. The first, SoftwareInit, it calls when booting so that software components
which need to can be sure they’re initialized before execution begins. The second, Boot, signals an event

once the entire boot sequence is over. Many components need initialization. For example, in the very simple

application RadioCountToLeds, there are ten components wired to MainC.SoftwareInit. Rather than use

many Init interfaces and call them in some order, MainC just calls SoftwareInit once and this call forks out


to all of the components that have wired to it.

5.2 Combine Functions

Fan-out raises an interesting question: if

call SoftwareInit.init() Listing 5.28: Calling SoftwareInit.init()

actually calls ten different functions, then what is its return value?

nesC provides the mechanism of combine functions to specify the return value. A data type can have an

associated combine function. Because a fan-out always involves calling N functions with identical signa-

tures, the caller’s return value is the result of applying the combine function to the return values of all of the

callees. When nesC compiles the application, it autogenerates a fan-out function which applies the combine
function.

5 Another approach could have been to use a parameterized interface (covered in the next chapter), but as the calls to Init are

supposed to be very self-contained, the idea is that the increased complexity wouldn’t be worth it.


For example, error_t’s combine function is ecombine (defined in types/TinyError.h):

error_t ecombine(error_t e1, error_t e2) {

return (e1 == e2)? e1: FAIL;

} Listing 5.29: The combine function for error_t

If both calls return the same value, ecombine returns that value. Otherwise, as only one of them could

be SUCCESS, it returns FAIL. This combine function is bound to error_t with a C attribute:

typedef uint8_t error_t __attribute__((combine(ecombine)));

Listing 5.30: Associating a combine function with a type

When asked to compile the following configuration

configuration InitExample {}

implementation {

components MainC;

components AppA, AppB;

MainC.SoftwareInit -> AppA;

MainC.SoftwareInit -> AppB;

} Listing 5.31: Fan–out on SoftwareInit


ncc will generate something like the following code:

error_t MainC$SoftwareInit$init() {
  error_t result;
  result = AppA$SoftwareInit$init();
  result = ecombine(result, AppB$SoftwareInit$init());
  return result;
} Listing 5.32: Resulting code from fan–out on SoftwareInit
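The shape of this generated code can be modeled in plain C (a sketch: fanout_init and the results array are hypothetical stand-ins for the callee functions ncc emits; ecombine follows the definition in TinyError.h):

```c
#include <stdint.h>

typedef uint8_t error_t;
enum { SUCCESS = 0, FAIL = 1, EBUSY = 2 };

/* ecombine: identical results pass through, any disagreement collapses
   to FAIL. */
static error_t ecombine(error_t e1, error_t e2) {
  return (e1 == e2) ? e1 : FAIL;
}

/* A hand-written model of a generated fan-out: fold ecombine over the
   return value of every wired callee. */
static error_t fanout_init(const error_t *results, int n) {
  error_t r = results[0];
  for (int i = 1; i < n; i++) {
    r = ecombine(r, results[i]);
  }
  return r;
}
```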

Some return values don’t have combine functions, either due to programmer oversight or the semantics

of the data type. Examples of the latter include things like data pointers: if both calls return a pointer, say,

to a packet, there isn’t a clear way to combine them into a single pointer. If your program has fan-out on a

call whose return value can’t be combined, the nesC compiler will issue a warning along the lines of

6 The nesC compiler actually compiles to C, which it then passes to a native C compiler. Generally, it uses $ as the delimiter

between component, interface, and function names. Because nesC does not allow $, this allows the compiler to enforce component

encapsulation (there’s no way to call a function with a $ from within nesC and break the component boundaries).


“calls to Receive.receive in CC2420ActiveMessageP fan out, but there is no combine function specified for the return value.”

Programming Hint 11: Never ignore combine warnings.

Chapter 6

Parameterized Wiring

Sometimes, a component wants to provide many instances of an interface. For example, the basic timer


implementation component HilTimerMilliC doesn’t provide just one timer: it needs to provide many timers.

One way it could do so is by having a long signature:

configuration HilTimerMilliC {

provides interface Timer<TMilli> as Timer0;

provides interface Timer<TMilli> as Timer1;

provides interface Timer<TMilli> as Timer2;

provides interface Timer<TMilli> as Timer3;
...
provides interface Timer<TMilli> as Timer100;
} Listing 6.1: Timers without parameterized interfaces

While this works, it’s kind of a pain and leads to a lot of repeated code. Every instance needs to have

its own implementation. That is, there will be 100 different startPeriodic functions, even though they’re

almost completely identical. Another approach could be to have a call parameter to the Timer interface that

specifies which timer is being changed, sort of like a file descriptor in POSIX file system calls. In this case,

HilTimerMilliC would look like this:

configuration HilTimerMilliC {

provides interface Timer;

} Listing 6.2: Timers with a single interface

1 See TEP 102 for details.


Components that use timers would have some way of generating unique timer identifiers, and would

pass them in every call:

call Timer.startPeriodic(timerDescriptor, 1024); // Fire at 1Hz

Listing 6.3: Starting a timer with a run–time parameter

While this approach works and doesn’t lead to multiple implementations, passing the parameter is generally unnecessary, in that components generally allocate some number of timers and then only use those
timers. That is, the set of timers a component uses and the size of the set are generally known at compile

time. Making the caller pass the parameter at runtime is therefore unnecessary, and could possibly introduce

bugs (e.g., if it were, due to laziness, stored in a variable).

There are other situations when a component wants to provide a large number of interfaces, such as

communication. Active messages have an 8-bit type field, which is essentially a protocol identifier. In the


Internet, the valid protocol identifiers for IP are well specified, and many port numbers for TCP are well

established. When a node receives an IP packet with protocol identifier 6, it knows that this is a TCP packet

and dispatches it to the TCP stack. Active messages need to perform a similar function, albeit without the

standardization of IANA: a network protocol needs to be able to register to send and receive certain AM

types. Like timers, with basic interfaces there are two ways to approach this: code redundancy or run-time

parameters. That is, you could either have a configuration like this

configuration NetworkProtocolC {...}

implementation {

components NetworkProtocolP, PacketLayerC;

NetworkProtocolP.Send -> PacketLayerC.Send15;

} Listing 6.4: Wiring to AM type 15 by name

or the network protocol code could look like this:

call Send.send(15, msg, sizeof(payload_t));

Listing 6.5: Calling AM type 15 with a compile–time parameter

Neither of these solutions is very appealing. The first leads to a lot of redundant code, wasting code

memory. Also, as the wiring is by name, it is difficult to wire to. That is, there is no way to manipulate

constants in order to control the wiring. For example, if a sensor filter and a routing stack both wire to


Timer3, there’s no way to separate them without changing the code text of one of them to read “Timer4.”

One way to manage the namespace would be to have components leave their timers unwired and then

expect the application to resolve all of them. But this places a large burden on an application developer. For

example, a small application that builds on top of a lot of large libraries might have to wire eight different

timers. Additionally, it means that the components it includes aren’t self-contained, working abstractions:

they have remaining dependencies that an application developer needs to resolve.

The second approach is superior to the first at first glance, but it turns out to have even more significant

problems. First, in many cases the identifier is a compile-time constant. Requiring the caller to pass it as a

run-time parameter is unnecessary and is a possible source of bugs. Second, and more importantly, it pushes

identifier management into the caller. For example, let’s return to the timer example:

call Timer.startPeriodic(timerDescriptor, 1024); // Fire at 1Hz

Listing 6.6: Starting a timer with a run–time parameter

From the calling component’s perspective, it doesn’t care which timer it’s using. All it cares is that it

has its own timer. By making the identifier part of the call, this forces the module to know (and manage)

the name of the identifier. The third and largest problem, however, isn’t with calls out to other components:

it’s with calls in from other components. In Timer, for example, how does the timer service signal a fired()

event? Because the identifier is a runtime parameter, the only way is for Timer.fired() to fan out to all timers,

and have them all check the identifier.
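This fan-out-and-filter cost is easy to see in a small C sketch. The names are hypothetical: with a run-time identifier, every handler receives every event and must discard the ones that aren’t its own:

```c
#include <assert.h>
#include <stdint.h>

/* Each client records which timer id it owns and counts its firings. */
static uint8_t clientA_id = 0, clientB_id = 1;
static int clientA_fired = 0, clientB_fired = 0;

/* With a run-time parameter, every handler gets every event and must
   filter on the identifier itself. */
static void clientA_fired_handler(uint8_t id) {
  if (id != clientA_id) return;  /* not mine: discard */
  clientA_fired++;
}
static void clientB_fired_handler(uint8_t id) {
  if (id != clientB_id) return;  /* not mine: discard */
  clientB_fired++;
}

/* The timer service has no choice but to fan out to everyone. */
static void timer_fired(uint8_t id) {
  clientA_fired_handler(id);
  clientB_fired_handler(id);
}
```

Every fired event executes every handler, even though only one of them does any work; compare this with the compile-time switch dispatch that parameterized interfaces enable.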

To support abstractions that have sets of interfaces, nesC has parameterized interfaces. You’ve seen them

in a few of the earlier example signatures. A parameterized interface is essentially an array of interfaces,

and the array index is the parameter. For example, this is the signature of ActiveMessageC:

configuration ActiveMessageC {

provides {

interface Init;

interface SplitControl;

interface AMSend[uint8_t id];

interface Receive[uint8_t id];

interface Receive as Snoop[uint8_t id];

interface Packet;

interface AMPacket;

interface PacketAcknowledgements;
}
} Listing 6.7: ActiveMessageC signature

AMSend, Receive, and Snoop are all parameterized interfaces. Their parameter is the AM type of the

message (the protocol identifier). Normally, components don’t wire directly to ActiveMessageC. Instead,


they use AMSenderC, AMReceiverC, and the other virtualized abstractions. However, there are some

test applications for the basic AM abstraction, such as TestAM. The module TestAMC sends and receives packets:
module TestAMC {

uses {
  ...
  interface Receive;
  interface AMSend;
  ...
}
} Listing 6.8: Signature of TestAMC

TestAMAppC is the configuration that wires up the TestAMC module:

configuration TestAMAppC {}

implementation {

components MainC, TestAMC as App;

components ActiveMessageC;

MainC.SoftwareInit -> ActiveMessageC;

App.Receive -> ActiveMessageC.Receive[240];

App.AMSend -> ActiveMessageC.AMSend[240];


} Listing 6.9: Wiring TestAMC to ActiveMessageC

Note that TestAM has to wire SoftwareInit to ActiveMessageC because it doesn’t use the standard

abstractions, which auto-wire it. This configuration means that when TestAMC calls AMSend.send, it

calls ActiveMessageC.AMSend number 240, so it sends packets with protocol ID 240. Similarly, TestAMC receives

packets with protocol ID 240. Because these constants are specified in the configuration, they are not bound

in the module: from the module’s perspective, they don’t even exist. That is, from TestAMC’s perspective,

these two lines of code are identical:

3 See TEP 116: Packet Protocols, for details.

TestAMC.AMSend -> ActiveMessageC.AMSend240; // Not real TinyOS code

TestAMC.AMSend -> ActiveMessageC.AMSend[240];

Listing 6.10: Wiring to a single interface versus an instance of a parameterized interface

The difference lies in the component with the parameterized interface. The parameter is essentially another argument in functions of that interface. In ActiveMessageC.AMSend, for example, the parameter is

an argument passed to it in calls to send() and which it must pass in signals of sendDone(). But the parameterized interface gives you two key things. First, it automatically fills in this parameter when TestAMC calls

send (nesC generates a stub function to do so, and inlining makes the cost negligible). Second, it automatically dispatches on the parameter when ActiveMessageC signals sendDone (nesC generates a switch table

based on the identifier).
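A rough C model of these two mechanisms, with illustrative names (this shows the shape of the generated code, not actual ncc output):

```c
#include <assert.h>
#include <stdint.h>

typedef uint8_t error_t;
enum { SUCCESS = 0, FAIL = 1 };

static int testam_senddone_count = 0;

/* The single implementation in the AM layer: the interface parameter
   is just an extra function argument. */
static error_t AM_send(uint8_t id, uint16_t addr) {
  (void)addr;
  return id == 240 ? SUCCESS : FAIL;  /* stand-in body */
}

/* Stub nesC generates for TestAMC.AMSend.send: the constant 240 from
   the wiring is filled in automatically. */
static error_t TestAMC_AMSend_send(uint16_t addr) {
  return AM_send(240, addr);
}

/* The handler wired in at AM type 240. */
static void TestAMC_AMSend_sendDone(void) { testam_senddone_count++; }

/* Switch table nesC generates to dispatch sendDone on the parameter. */
static void AM_signal_sendDone(uint8_t id) {
  switch (id) {
    case 240: TestAMC_AMSend_sendDone(); break;
    default:  break;  /* unwired ids fall through to a default handler */
  }
}
```

The caller never mentions 240 in its code; the wiring supplies it on the way out, and the switch recovers the right callee on the way back in.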

In reality, ActiveMessageC is a configuration that encapsulates a particular chip, such as CC2420ActiveMessageC,

which encapsulates that chip’s implementation, such as CC2420ActiveMessageP:

module CC2420ActiveMessageP {

provides {

interface AMSend[am_id_t id];



} Listing 6.11: A possible module underneath ActiveMessageC

Within CC2420ActiveMessageP, this is what the parameterized interface looks like:

command error_t AMSend.send[am_id_t id](am_addr_t addr, message_t* msg, uint8_t len) {
  cc2420_header_t* header = getHeader( msg );
  header->type = id;
  ...
} Listing 6.12: Parameterized interface syntax

The interface parameter precedes the function argument list, and the implementation can treat it like any

other argument. Basically, it is a function argument that the nesC compiler fills in when components are

composed. When CC2420ActiveMessageP wants to signal sendDone, it pulls the protocol ID back out of

the packet and uses that as the interface parameter:

event void SubSend.sendDone(message_t* msg, error_t result) {
  signal AMSend.sendDone[call AMPacket.type(msg)](msg, result);
}
Listing 6.13: Dispatching on a parameterized interface

If the AM type of the packet is 240, then the dispatch code nesC generates will cause this line of code to

signal the sendDone() wired to ActiveMessageC.AMSend[240], which in this case is TestAMC.AMSend.sendDone.

CC2420ActiveMessageP.Receive looks similar to sendDone. The AM implementation receives a packet

from a lower level component and dispatches on the AM type to deliver it to the correct component. Depending on whether the packet is destined to the local node, it signals either Receive.receive or Snoop.receive:

event message_t* SubReceive.receive(message_t* msg, void* payload, uint8_t len) {
  if (call AMPacket.isForMe(msg)) {
    return signal Receive.receive[call AMPacket.type(msg)](msg, payload, len - CC2420_SIZE);
  } else {
    return signal Snoop.receive[call AMPacket.type(msg)](msg, payload, len - CC2420_SIZE);
  }
}
Listing 6.14: How active message implementations decide on whether to signal to Receive or Snoop

The subtraction of CC2420_SIZE is because the lower layer has reported the entire size of the packet, while to layers above AM the size of the packet is only the data payload (the entire size minus the size of the headers and footers, that is, CC2420_SIZE).
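The arithmetic itself is simple. A hedged C sketch with made-up sizes (the real values come from the CC2420 headers):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sizes only; the real constants live in the CC2420 headers. */
enum {
  HEADER_SIZE = 11,
  FOOTER_SIZE = 2,
  CC2420_SIZE = HEADER_SIZE + FOOTER_SIZE
};

/* The length the lower layer reports covers the whole packet; layers
   above AM only see the data payload. */
static uint8_t payload_len(uint8_t total_len) {
  return total_len - CC2420_SIZE;
}
```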

Parameterized interfaces get the best of both worlds. Unlike the name-based approach (e.g. Send240)

described above, there is a single implementation of the call. Additionally, since the parameter is a value,

unlike a name it can be configured and set. E.g., a component can do something like this:


#define ROUTING_TYPE 201


RouterP.AMSend -> PacketSenderC.AMSend[ROUTING_TYPE];

Listing 6.15: Defining a parameter

It also avoids the pitfalls of the runtime parameter approach. Because the constant is set at compile-time,

nesC can automatically fill it in and dispatch based on it, simplifying the code and improving the efficiency

of outgoing function invocations.

Note that you can also wire entire parameterized interfaces:

configuration CC2420ActiveMessageC {
  provides interface AMSend[am_id_t id];
  ...
} implementation {...}

configuration ActiveMessageC {
  provides interface AMSend[am_id_t id];
  ...
}
implementation {
  components CC2420ActiveMessageC;
  AMSend = CC2420ActiveMessageC;
} Listing 6.16: Wiring full parameterized interface sets

Programming Hint 12: If a function has an argument which is one of a small number of constants, consider

defining it as a few separate functions to prevent bugs. If the functions of an interface all have an argument

that’s almost always a constant within a large range, consider using a parameterized interface to save code

space. If the functions of an interface all have an argument that’s a constant within a large range but only

certain valid values, implement it as a parameterized interface but expose it as individual interfaces, to both

minimize code size and prevent bugs.

Parameterized interfaces aren’t limited to a single parameter. For example, this is valid code:

provides interface Timer[uint8_t x][uint8_t y];

In practice, however, this leads to large and inefficient code (nested switch statements), and so components rarely (if ever) use it.

6.1 Defaults

Because a module’s call points are resolved in configurations, a common compile error in nesC is to forget

to wire something. The equivalent in C is to forget to include a library in the link path, and in Java it’s to forget to include a jar. Usually, a dangling wire represents a bug in the program. With parameterized interfaces,

however, often they don’t.

Take, for example, the Receive interface of ActiveMessageC. Most applications receive a few AM types,

maybe 15 at most: they don’t respond to or use every protocol ever developed. However, there’s this call

in CC2420ActiveMessageP:

return signal Receive.receive[call AMPacket.type(msg)](msg, payload, len - CC2420_SIZE);

Listing 6.17: Signaling Receive.receive


On one hand, if all of the nodes in the network run the same executable, it’s possible that none of them

will ever send a packet of, say, AM type 144. However, if there are other nodes nearby, or if packets are

corrupted in memory before being sent (or after being received), then it’s very possible that a node which

doesn’t care about protocol 144 will receive a packet of this type. Therefore nesC expects the receive event

to have a handler: it needs to execute a function when this happens. But the application doesn’t wire to

Receive[144], and making a developer wire to all of the unwired instances is unreasonable, especially as

they’re all null functions (in the case of Receive.receive, the handler just returns the packet passed to it).

nesC therefore has default handlers. A default handler is an implementation of a function that’s used

if no implementation is wired in. If a component wires to the interface, then that implementation is used.

Otherwise, the call (or signal) goes to the default handler. For example, CC2420ActiveMessageP has the

following default handlers:

default event message_t* Receive.receive[am_id_t id](message_t* msg, void* payload, uint8_t len) {
  return msg;
}
default event message_t* Snoop.receive[am_id_t id](message_t* msg, void* payload, uint8_t len) {
  return msg;
}
default event void AMSend.sendDone[uint8_t id](message_t* msg, error_t err) {
} Listing 6.18: Default events in an active message implementation

In the TestAM application, TestAMAppC wires TestAMC to ActiveMessageC.Receive[240]. Therefore, on the telos or micaz platform, when the radio receives a packet of AM type 240, it signals TestAMC.Receive.receive(). Since the application doesn’t use any other protocols, when it receives an active

message of any other type it signals CC2420ActiveMessageP’s default handler.
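Putting the pieces together, the generated receive dispatch behaves like the following C sketch. The names are illustrative; the buffer-swap convention is modeled by returning a message_t*:

```c
#include <assert.h>
#include <stdint.h>

typedef struct { uint8_t data[8]; } message_t;

static message_t app_buffer;   /* fresh buffer the app swaps in */
static message_t incoming;     /* packet arriving from the radio */
static int app_receive_count = 0;

/* Handler wired at AM type 240: consume the packet and hand back a
   replacement buffer, per the TinyOS buffer-swap convention. */
static message_t* TestAMC_Receive_receive(message_t* msg) {
  (void)msg;
  app_receive_count++;
  return &app_buffer;
}

/* Default handler used for every unwired AM type: return the packet
   unread, exactly as the null handler in Listing 6.18 does. */
static message_t* default_Receive_receive(uint8_t id, message_t* msg) {
  (void)id;
  return msg;
}

/* Generated dispatch: wired ids reach their handler, everything else
   falls through to the default. */
static message_t* Receive_signal(uint8_t id, message_t* msg) {
  switch (id) {
    case 240: return TestAMC_Receive_receive(msg);
    default:  return default_Receive_receive(id, msg);
  }
}
```

A packet of type 144 passes through untouched, while a packet of type 240 is consumed and a replacement buffer handed back.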

Default handlers are dangerous, in that using them improperly can cause your code to stop working.

For example, while CC2420ActiveMessageP has a default handler for AMSend.sendDone, TestAMC does not have a default command for AMSend.send. Otherwise, you could forget to wire TestAMC.AMSend and the program would compile fine. That is, defaults should only be used when an interface is not necessary for the proper

execution of a component. This almost always involves parameterized interfaces, as it’s rare that all of the

parameter values are used.


6.2 unique() and uniqueCount()

Parameterized interfaces were originally intended to support abstractions like active messaging. It turns out,

however, that they are much more powerful than that. If you look at the structure of most basic TinyOS

2.0 abstractions, there’s a parameterized interface in there somewhere. The ability to specify compile-time

constants outside of modules, combined with dispatch, means that we can use parameterized interfaces to

distinguish between many different callers. A component can provide a service through a parameterized

interface, and every client that needs to use the service can wire to a different parameter ID. For split-phase

calls, this means that you can avoid fan-out on the completion event. Consider these two examples:

components RouterC, SourceAC, SourceBC;

SourceAC.Send -> RouterC;

SourceBC.Send -> RouterC; Listing 6.19: Single–wiring a Send


components RouterC, SourceAC, SourceBC;

SourceAC.Send -> RouterC.Send[0];

SourceBC.Send -> RouterC.Send[1];

Listing 6.20: Wiring to a parameterized Send

In both cases, SourceAC and SourceBC can call Send.send. In the first case, when RouterC signals

Send.sendDone, that signal will fan-out to both SourceAC and SourceBC, who will have to determine —

by examining the message pointer, or internal state variables — whether the event is intended for them or

someone else. In the second case, however, if RouterC keeps the parameter ID passed in the call to Send,

then it can signal the appropriate completion event. E.g., SourceBC calls Send.send, RouterC stores the ID

1, and when it signals sendDone it signals it on Send.sendDone[1](...).
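The bookkeeping in the second case is small. Here is a C sketch of the idea, with illustrative names: RouterC stores the interface parameter of the caller and uses it to route the completion event to that caller alone:

```c
#include <assert.h>
#include <stdint.h>

typedef uint8_t error_t;
enum { SUCCESS = 0, EBUSY = 1 };

static int sourceA_done = 0, sourceB_done = 0;

/* Completion handlers wired at parameters 0 and 1. */
static void SourceA_sendDone(void) { sourceA_done++; }
static void SourceB_sendDone(void) { sourceB_done++; }

static uint8_t current_client;  /* RouterC remembers who is sending */
static int busy = 0;

/* RouterC.Send.send: the interface parameter identifies the caller. */
static error_t Router_send(uint8_t client_id) {
  if (busy) return EBUSY;
  busy = 1;
  current_client = client_id;
  return SUCCESS;
}

/* When the send completes, RouterC signals only the stored client,
   rather than fanning out to everyone. */
static void Router_sendDone(void) {
  busy = 0;
  switch (current_client) {
    case 0: SourceA_sendDone(); break;
    case 1: SourceB_sendDone(); break;
  }
}
```

Neither source component ever sees a completion event intended for the other, so neither needs the message-pointer or state-variable checks that the fan-out version requires.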

Let’s return to the timer example, where this abstraction is particularly powerful. The timer component

HilTimerMilliC has the following signature:

configuration HilTimerMilliC {

provides interface Timer<TMilli>[uint8_t];

} Listing 6.21: Partial HilTimerMilliC signature


Because Timer is parameterized, many different components can wire to separate interface instances.

When a component calls Timer.startPeriodic, nesC fills in the parameter ID, which the timer implementation

can use to keep track of which timer is being told to start. Similarly, the timer implementation can signal

Timer.fired on specific timer instances.

For things such as network protocols, where the parameter to an interface is a basis for communication and interoperability, the actual parameter used is important. For example, if you have two different

compilations of the same application, but one wires a protocol with

RouterC.Send -> ActiveMessageC.Send[210];

Listing 6.22: Wiring to Send instance 210

while the other wires it with

RouterC.Send -> ActiveMessageC.Send[211];

Listing 6.23: Wiring to Send instance 211

then they will not be able to communicate. In these cases, the parameter used is shared across nodes,

and so needs to be globally consistent. Similarly, if you had two protocols wire to the same AM type, then

this is a basic conflict that an application developer is going to have to resolve. Generally, protocols use

named constants (enums) to avoid these kinds of typos.

With timers and the Send client example above, however, there is no such restriction. The parameter

represents a unique client ID, rather than a piece of shared data. A client doesn’t care which timer it wires

to, as long as it wires to one that nobody else does. For this case, rather than force clients to guess IDs and

hope there is no collision, nesC provides a special compile-time function, unique().

It is a compile-time function because it is resolved at compile time. When nesC compiles an application,

it transforms all calls to unique() into an integer identifier. The unique function takes a string key as an

argument, and promises that every instance of the function with the same key will return a unique value.

Two calls to unique with different keys can return the same value. So if two components, AppOneC and

AppTwoC, both want timers, they could do this

AppOneC.Timer -> HilTimerMilliC.Timer[unique("Timer")];

AppTwoC.Timer -> HilTimerMilliC.Timer[unique("Timer")];

Listing 6.24: Generating a unique parameter for the HilTimerMilliC’s Timer interface



and be assured that they will have distinct timer instances. If there are n calls to unique with a given key, then the unique values will be in the range 0 to (n-1).
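unique() is resolved entirely at compile time, but its bookkeeping is easy to model at run time. The following C sketch (the names are mine, not a nesC API) keeps one counter per key, which is the contract described above; the uniqueCount analogue is only valid once all unique calls for that key have been made:

```c
#include <assert.h>
#include <string.h>

/* A tiny run-time model of what the nesC compiler does at compile time:
   each distinct key gets its own counter, and every call with that key
   returns the next value, 0..n-1. */
#define MAX_KEYS 8
static const char* keys[MAX_KEYS];
static int counters[MAX_KEYS];
static int nkeys = 0;

static int key_index(const char* key) {
  for (int i = 0; i < nkeys; i++)
    if (strcmp(keys[i], key) == 0) return i;
  keys[nkeys] = key;       /* first time this key is seen */
  counters[nkeys] = 0;
  return nkeys++;
}

/* Model of unique(key): returns 0, 1, 2, ... per key. */
static int unique_sim(const char* key) {
  return counters[key_index(key)]++;
}

/* Model of uniqueCount(key): total calls to unique with this key.
   (Only meaningful here after all unique_sim calls have been made;
   the real uniqueCount has no such ordering restriction.) */
static int uniqueCount_sim(const char* key) {
  return counters[key_index(key)];
}
```

Two calls with the key "Timer" yield 0 and 1, a call with a different key restarts at 0, and the count for "Timer" comes out as 2, which is exactly the state-array size a component like HilTimerMilliC needs.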

The combination of parameterized interfaces and the unique function allow services to provide a limited

form of isolation between their clients (i.e., no fan-out on completion events). For example, in TinyOS 2.0,

there are several situations when more than one component wants to access a shared resource. For timing

reasons, fully virtualizing the resource (i.e., using a request queue) isn’t feasible. Instead, the components

need to be able to request the resource and know when it has been granted to them. TinyOS provides this

mechanism through the Resource interface:

interface Resource {

async command error_t request();

async command error_t immediateRequest();

event void granted();

async command void release();

async command uint8_t getId();

} Listing 6.25: The Resource Interface

A component can request the resource either through request() or immediateRequest(). The latter returns SUCCESS only if the user was able to acquire the resource at that time and otherwise does nothing (it is a single-phase call). The request() call, in contrast, is split-phase, with the granted() event indicating that it is safe to use the resource.

Resource is for when multiple clients need to share the component. So TinyOS has Arbiters, which are

components that institute a sharing policy between different clients. An arbiter provides a parameterized

Resource interface:

configuration FcfsArbiterC {
  provides interface Resource[uint8_t id];
  ...
} Listing 6.26: The First–Come–First–Served arbiter

Each client wires to a unique instance of the Resource interface, and the arbiter uses the client IDs to

keep track of who has the resource.
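As a rough illustration of what an arbiter does with those client IDs, here is a minimal first-come-first-served sketch in C. The names and the fixed-size queue are hypothetical; the real FcfsArbiterC is nesC and more involved:

```c
#include <assert.h>
#include <stdint.h>

typedef uint8_t error_t;
enum { SUCCESS = 0 };

#define NUM_CLIENTS 4  /* uniqueCount(...) in real code */

static int holder = -1;                 /* client currently granted, -1 if free */
static uint8_t queue[NUM_CLIENTS];      /* FIFO of waiting client ids */
static int qhead = 0, qlen = 0;
static int granted_count[NUM_CLIENTS];  /* counts granted() signals per client */

static void signal_granted(uint8_t id) { granted_count[id]++; }

/* Resource.request[id](): grant immediately if free, else enqueue the
   client for later (the split-phase granted() comes on release). */
static error_t request(uint8_t id) {
  if (holder < 0) {
    holder = id;
    signal_granted(id);
  } else {
    queue[(qhead + qlen++) % NUM_CLIENTS] = id;
  }
  return SUCCESS;
}

/* Resource.release[id](): pass the resource to the oldest waiter. */
static void release(uint8_t id) {
  if (holder != (int)id) return;  /* only the holder may release */
  if (qlen > 0) {
    holder = queue[qhead];
    qhead = (qhead + 1) % NUM_CLIENTS;
    qlen--;
    signal_granted((uint8_t)holder);
  } else {
    holder = -1;
  }
}
```

The parameter ID is what lets the arbiter signal granted() to exactly one client instead of fanning out to all of them.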

4 In practice, clients rarely call unique() directly. Instead, these calls are encapsulated inside generic components, which are

discussed in the next chapter. One common problem with unique() encountered in TinyOS 1.x is that a mistyped key will generate

a non-unique value and possibly cause very strange behavior.


In these examples – Timer and Resource – there is an additional factor to consider: each client requires

the component to store some amount of state. For example, arbiters have to keep track of which clients have

pending requests, and timer systems have to keep track of the period of each timer, how long until it fires,

and whether it’s active. Because the calls to unique define the set of valid client IDs, nesC has a second

compile-time function, uniqueCount(). This function also takes a string key. If there are n calls to unique

with a given key (returning values 0...n-1), then uniqueCount returns n, and this is resolved at compile-time.

Being able to count the number of unique clients allows a component to allocate the right amount of state

to support them. Early versions of nesC didn’t have the uniqueCount function: components were forced to

allocate a fixed amount of state. If there were more clients than the state could support, one or more would

fail at runtime. If there were fewer clients than the state could support, then there was wasted RAM. Because

a component can count the number of clients and know the set of client IDs that will be used, it can promise

that each client will be able to work and use the minimum amount of RAM needed. Returning to the timer

example from above:

AppOneC.Timer -> HilTimerMilliC.Timer[unique("Timer")];

AppTwoC.Timer -> HilTimerMilliC.Timer[unique("Timer")];

Listing 6.27: Wiring to HilTimerMilliC with unique parameters

and HilTimerMilliC could allocate state for each client:

timer_t timers[uniqueCount("Timer")];

Listing 6.28: Counting how many clients have wired to HilTimerMilliC

Assuming the above two were the only timers, then HilTimerMilliC would allocate two timer struc-

tures. If we assume that AppOneC.Timer was assigned ID 0 and AppTwoC.Timer was assigned ID 1, then

HilTimerMilliC can directly use the parameters as an index into the state array. This isn’t how HilTimerMilliC works: it’s actually a bit more complicated, as it uses generic components, which the next chapter describes.

Chapter 7

Generic Components

Generic components (and typed interfaces) are the biggest addition in nesC 1.2. They are, for the most part,

what lead TinyOS 2.0 to be significantly different than 1.x. Normally, components are singletons. That is,

a component’s name is a single entity in a global namespace. When two different configurations reference

MainC, they are both referencing the same piece of code and state. In the world of C++ and Java, a singleton

is similar (but not identical) to a static class.

Generic components are not singletons. They can be instantiated within a configuration. Take, for example, something like a bit vector. Many components need and use bit vectors. By having a single component

that provides this abstraction, we prevent bugs by reducing code repetition. If you only have singletons,

then every bit vector has to be a different component, each of which has a separate implementation. By

making a bit vector a generic component, we can write it once and use it many times. This is the signature

of BitVectorC, which we saw in an earlier chapter:

generic module BitVectorC( uint16_t max_bits ) {

provides interface Init;

provides interface BitVector;

} Listing 7.1: The BitVectorC generic module

A configuration instantiates a generic component with the keyword new. This is in contrast to singleton components, which are just named. Instantiation is private to the configuration, so every time new is used an

instance is created. Instantiating modules copies their code. Therefore, while generic modules can simplify

the source code of an application and prevent bugs from code copying, they do not inherently reduce the

size of the final executable.

For example, the code


configuration ExampleVectorC {}

implementation {

components new BitVectorC(10);

} Listing 7.2: Instantiating a BitVectorC

creates a BitVectorC of size 10. This creates a copy of the BitVectorC code where every instance of the argument max_bits is replaced by the constant 10.


Generic components use a code-copying approach for two reasons: simplicity and types. If generic

modules did not use a code-copying approach, then there would be a single copy of the code that works for

all instances of the component. This is difficult when a generic component can take a type as an argument, as

allocation size, offsets, and other considerations can make a truly single copy unfeasible (C++ templates, for example, create a separate copy of code for each template type). Code sharing between instances of identical types requires

adding an argument, such as a pointer, to all of the functions. This argument indicates which instance

is executing. Additionally, all variable accesses would have to offset from this pointer. In essence, the

execution time and costs of functions could change significantly (offsets rather than constant accesses). In

order to provide simple, easy to understand and run-time efficient components, nesC uses a code-copying

approach, sacrificing possible reductions in code size.
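nesC’s code-copying can be mimicked in C with a macro that stamps out a fresh copy of both code and state per instance, with the size argument baked in as a constant. The bit-vector below is a hypothetical sketch, not the real BitVectorC:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* A rough model of generic-module instantiation: each expansion creates
   its own functions and its own storage, sized by max_bits, much as nesC
   copies a generic module's code for every `new`. */
#define DEFINE_BIT_VECTOR(name, max_bits)                          \
  static uint8_t name##_store[((max_bits) + 7) / 8];               \
  static void name##_clearAll(void) {                              \
    memset(name##_store, 0, sizeof name##_store);                  \
  }                                                                \
  static void name##_set(uint16_t i) {                             \
    name##_store[i / 8] |= (uint8_t)(1u << (i % 8));               \
  }                                                                \
  static int name##_get(uint16_t i) {                              \
    return (name##_store[i / 8] >> (i % 8)) & 1;                   \
  }

/* Two instances: two independent copies of code and state, with
   different sizes resolved at compile time. */
DEFINE_BIT_VECTOR(sendFlags, 10)
DEFINE_BIT_VECTOR(ackFlags, 40)
```

Each instance’s storage is sized by its own constant, and setting a bit in one instance cannot disturb the other, at the cost of duplicating the function bodies, which is precisely the trade-off described above.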

A generic component is private to the configuration that instantiates it, as no other configuration can

name it. However, the instantiator can make the generic accessible to other components by exporting its

interfaces. If the exporting configuration is a singleton, then it can provide a globally accessible name to

the component within (the section below on the TinyOS 2.0 timer system describes an example of this


Code-copying applies to configurations as well as modules. One (reasonably accurate) way to think

of a generic component is that it literally creates a copy of the source file. In the case of modules, this is

executable code and variables that will be in the final application. In the case of configurations, however, this

copying can instantiate other components and create wirings. A generic module defines a piece of repeatable

executable logic: a generic configuration defines a repeatable pattern of composition between components.

Generic components differ in syntax from singleton components by having an argument list. Generics

with no arguments have an empty list, just like a function. For example:

configuration TimerMilliImplC {...}

generic configuration TimerMilliC() {...}

module Msp430ClockP {...}


generic module VirtualizeTimerC(typedef precision_tag, int max_timers) {...}

Listing 7.3: Generic Component Syntax

Both generics have an argument list. TimerMilliC takes no arguments, however, so the list is empty.

VirtualizeTimerC, in contrast, takes two arguments, a type (used to check precision) and a count of the

number of timers it should provide. Generic components support three types for arguments:

1. Types: these can be arguments to typed interfaces

2. Numeric constants

3. Constant strings

7.1 Generic Modules

BitVectorC is a generic module with a single argument, a uint16_t. If an argument is a type, then it is

declared with the typedef keyword. For example, HilTimerMilliC is often built on top of a single timer.

Components in the timer library (lib/timer) virtualize the single underlying timer into many timers. The

timer interface, however, has a type as an argument to ensure that the precision requirements are met. This

means that a component which virtualizes timers must have this type passed into it. This is the signature of VirtualizeTimerC:


generic module VirtualizeTimerC( typedef precision_tag, int max_timers ) {

provides interface Timer<precision_tag> as Timer[ uint8_t num ];

uses interface Timer<precision_tag> as TimerFrom;

} Listing 7.4: The VirtualizeTimerC generic module

It is a generic module with two arguments. The first argument is the timer precision tag, which is a

type and is therefore declared with the typedef keyword. In the case of HilTimerMilliC, this is TMilli. This precision tag

provides additional type checking for an interface. Rather than have separate Timer interfaces for different

time fidelities, TinyOS uses a single Timer interface with the fidelity as an argument. The argument itself is

an empty struct that is never allocated; because it is a struct, it is very unlikely that there will be inadvertent

implicit casts.

The second argument is the number of virtualized timers the component provides. This is usually computed with a uniqueCount(). Because VirtualizeTimerC is a module, instantiating one will allocate the

necessary state. As generic modules create code copies, the lines


components new VirtualizeTimerC(TMilli, 3) as TimerA;

components new VirtualizeTimerC(TMilli, 4) as TimerB;

Listing 7.5: Instantiating two VirtualizeTimerC components

generate two copies of the VirtualizeTimerC’s code, one of which allocates three timers, the other of

which allocates four. Again, nesC does this because different instances of VirtualizeTimerC might have

different types and different constants. For example, the max_timers argument can be used in loops, say,

when checking if timers are pending. Rather than go for an object-oriented (or C++ template-like) approach

of passing data pointers around, nesC just creates copies of the code. Because all of these copies come from

a single source file, they are all consistent and don’t create maintenance problems in the way that multiple

source files can.

7.2 HilTimerMilliC: An Example Use of Generic Components

Implementing a solid and efficient timer subsystem is very difficult. T2 makes the task simpler by having

a library of reusable components (lib/timer) that provide many of the needed pieces of functionality.

Each supported microcontroller provides a timer system by layering these library components on top of

some low-level hardware abstractions. Because many microcontrollers have several clock sources, most

of these library components are generic components, so that a platform can readily provide several timer

systems of different fidelities.

For example, this is the full code for HilTimerMilliC on the micaZ platform. It is defined in platform/mica, which contains common abstractions across the entire mica family (e.g., mica2, mica2dot, micaz):

#include "Timer.h"

configuration HilTimerMilliC {
  provides interface Init;
  provides interface Timer<TMilli> as TimerMilli[uint8_t num];
  provides interface LocalTime<TMilli>;
}
implementation {
  enum {
    TIMER_COUNT = uniqueCount(UQ_TIMER_MILLI)
  };
  components AlarmCounterMilliP, new AlarmToTimerC(TMilli),
    new VirtualizeTimerC(TMilli, TIMER_COUNT),
    new CounterToLocalTimeC(TMilli);

  Init = AlarmCounterMilliP;
  TimerMilli = VirtualizeTimerC;
  VirtualizeTimerC.TimerFrom -> AlarmToTimerC;
  AlarmToTimerC.Alarm -> AlarmCounterMilliP;
  LocalTime = CounterToLocalTimeC;
  CounterToLocalTimeC.Counter -> AlarmCounterMilliP;
} Listing 7.6: The full code of HilTimerMilliC

The only singleton component in this configuration is AlarmCounterMilliP, which is an abstraction of

a low-level microcontroller timer. HilTimerMilliC uses three generic components on top of AlarmCounterMilliP to provide a full timer system:

• CounterToLocalTimeC turns a hardware counter into a local timebase;

• AlarmToTimerC turns an Alarm interface, which provides an async one-shot timer, into a Timer

interface, which provides a sync timer with greater functionality;

• VirtualizeTimerC virtualizes a single Timer into many Timers, the number specified by an argument to the component.
All of the generics have the type TMilli as one of their arguments. These type arguments make sure that timer fidelities are not accidentally changed. E.g., VirtualizeTimerC takes a single timer of fidelity precision_tag and virtualizes it into max_timers timers of the same precision. The timer library has components that translate between precisions.

UQ_TIMER_MILLI is a #define (from Timer.h) for the string “HilTimerMilliC.Timer”. This is a common approach in T2. Using a #define makes it harder to run into bugs caused by errors in the string: chances are that a typo in the define will be a compile-time error. This is generally good practice for components that depend on unique strings.

Programming Hint 13: If a component depends on a unique string, then #define that string in a header file and use the #define, to prevent bugs from string typos.

The first thing HilTimerMilliC does is define an enum for the number of timers being used. It assumes that each timer client has wired to TimerMilli with a call to unique(UQ_TIMER_MILLI). HilTimerMilliC takes an async hardware timer, AlarmCounterMilliP, and turns it into a virtualized timer. It does this in three steps. The first step turns the Alarm (the async timer) into a Timer, with the generic component AlarmToTimerC:


AlarmToTimerC.Alarm -> AlarmCounterMilliP;

Listing 7.7: Turning an Alarm into a Timer

The second step virtualizes a single timer into many timers:

VirtualizeTimerC.TimerFrom -> AlarmToTimerC;

Listing 7.8: Virtualizing a Timer

It then exports the parameterized timer interface:

TimerMilli = VirtualizeTimerC;

Listing 7.9: Exporting the virtualized Timer interface

Additionally, some aspects of the timer system require being able to access a time base, for example, to

specify when in the future a timer fires. So HilTimerMilliC takes a hardware counter and turns it into a local

time component,

CounterToLocalTimeC.Counter -> AlarmCounterMilliP;

Listing 7.10: Wiring to a counter for a time base

then exports the interface:

LocalTime = CounterToLocalTimeC;

Listing 7.11: Exporting the LocalTime interface

Many of the components in the timer library are generics because a platform might need to provide

a wide range of timers. For example, depending on the number of counters, compare registers, and their

width, a platform might provide millisecond, microsecond, and 32kHz timers. The variants of the MSP430

chip family that some platforms use, for example, can easily provide millisecond and 32kHz timers with a

very low interrupt load: their compare registers are 16 bits, so even at 32kHz they only fire one interrupt

every two seconds.

Generic modules work very well for abstractions that have to allocate per-client state, such as timers or

resource arbiters. A generic module allows you to specify the size – the number of clients – in the config-

uration that instantiates the module, rather than within the module itself. For example, if VirtualizeTimerC

were not a generic, then inside its code there would have to be a uniqueCount() with the proper key.


Unlike standard components, generics can only be named by the configuration that instantiates them.

For example, in the case of HilTimerMilliC, no other component can wire to the VirtualizeTimerC that it

instantiates. The generic is private to HilTimerMilliC. The only way it can be made accessible is to export

its interfaces (which HilTimerMilliC does). This is how you can make an instance that many components

can wire to. You create a singleton by writing a configuration with the identical signature and just exporting

all of the interfaces. For example, let’s say you needed a bit vector to keep track of which system services

are running or not. You want many components to be able to access this vector, but BitVectorC is a generic.

So you write a component like this:

configuration SystemServiceVectorC {

provides interface BitVector;
}
implementation {

components MainC, new BitVectorC(uniqueCount(UQ_SYSTEM_SERVICE));

MainC.SoftwareInit -> BitVectorC;

BitVector = BitVectorC;

} Listing 7.12: The fictional component SystemServiceVectorC

Now many components can refer to this particular bit vector. While you can make a singleton out of a

generic by instantiating it within one, the opposite is not true: a component is either instantiable or not.

7.3 Generic Configurations

Generic modules are a way to reuse code and separate common abstractions into well-tested building blocks

(there only needs to be one FIFO queue implementation, for example). nesC also has generic configurations,

which are a very powerful tool for building TinyOS abstractions and services. However, just as configurations are harder for a novice programmer to understand than modules, generic configurations are a bit more challenging than generic modules.

The best way to describe what role a generic configuration can play in a software design is to start from

first principles:

A module is a component that contains executable code. A configuration defines relationships between components to form a higher-level abstraction. A generic module is a reusable piece of executable code. Therefore, a generic configuration is a reusable set of relationships that form a higher-level abstraction.

Several examples in this book have mentioned and described HilTimerMilliC. But if you look at TinyOS

code, there is only one component that references it. Although it is a very important component, programs


never directly name it. It is the core part of the timer service, but applications that need timers instantiate a

generic component named TimerMilliC.

Before delving into generic configurations, however, let’s consider what code looks like without them.

Let’s say we have HilTimerMilliC, and nothing more. Many components need timers; HilTimerMilliC

enables this through its parameterized interface. Remember that HilTimerMilliC encapsulates an instance

of VirtualizeTimerC, whose size parameter is a call to uniqueCount(UQ_TIMER_MILLI). This means that if a

component AppP needs a timer, then its configuration AppC must wire it like this:

configuration AppC {...}
implementation {
  components AppP, HilTimerMilliC;
  AppP.Timer -> HilTimerMilliC.TimerMilli[unique(UQ_TIMER_MILLI)];
} Listing 7.13: Wiring directly to HilTimerMilliC

Now, let’s say that AppP actually needs three timers. The code would look like this:

configuration AppC {...}
implementation {
  components AppP, HilTimerMilliC;
  AppP.Timer1 -> HilTimerMilliC.TimerMilli[unique(UQ_TIMER_MILLI)];
  AppP.Timer2 -> HilTimerMilliC.TimerMilli[unique(UQ_TIMER_MILLI)];
  AppP.Timer3 -> HilTimerMilliC.TimerMilli[unique(UQ_TIMER_MILLI)];
} Listing 7.14: Wiring directly to HilTimerMilliC three times

This approach can work fine: it’s how TimerC in TinyOS 1.x works. But it does have some issues. First, references to UQ_TIMER_MILLI are sprinkled throughout many components in the system: changing the identifier used is not really possible. This is especially true because a call to unique() with an incorrect (but valid) parameter will not return an error. For example, if a component did this

AppP.Timer1 -> HilTimerMilliC.TimerMilli[unique(UQ_TIMER_MICRO)];

Listing 7.15: A buggy wiring to HilTimerMilliC

by accident, then there will be two components wiring to the same instance of Timer and the program

will probably exhibit really troublesome behavior. The issue is that a detail about the internal implementation

of the timer system — the key for unique that it uses — has to be exposed to other components. Usually, all

a component wants to do is allocate a new timer. It doesn’t care — and shouldn’t have to care — about how

it is implemented.


Generic configurations simplify this by defining a wiring pattern that can be instantiated. For example, all of the above users of timers just want to wire to a millisecond timer; they shouldn’t have to worry about unique keys, which are details of the HilTimerMilliC implementation and an easy source of bugs. The basic wiring pattern is this:

AppP.Timer1 -> HilTimerMilliC.TimerMilli[unique(UQ_TIMER_MILLI)];

Listing 7.16: The wiring pattern to HilTimerMilliC

The next section describes how TimerMilliC codifies this pattern to simplify using timers. It turns out

that generic configurations have a much broader (and more powerful) set of uses than simple unique key

management, and the next section covers some of these as well.

7.4 Examples

Because generic configurations are a challenging concept, we present four examples of their use for basic

abstractions in the TinyOS core. The examples increase in complexity.

7.4.1 TimerMilliC

TimerMilliC, the standard millisecond timer abstraction, is a generic configuration that provides a single Timer interface. Its implementation wires this interface to an instance of

the underlying parameterized Timer interface using the right unique key. This means that unique() is called

in only one file; as long as all components allocate timers with TimerMilliC, there is no chance of a key

match mistake. TimerMilliC’s implementation is very simple:

generic configuration TimerMilliC() {

provides interface Timer<TMilli>;
}
implementation {

components TimerMilliP;

Timer = TimerMilliP.TimerMilli[unique(UQ_TIMER_MILLI)];

} Listing 7.17: The TimerMilliC generic configuration

TimerMilliP is a singleton configuration that auto-wires HilTimerMilliC to the boot sequence and ex-

ports HilTimerMilliC’s parameterized interface:


configuration TimerMilliP {

provides interface Timer<TMilli> as TimerMilli[uint8_t id];
}
implementation {

components HilTimerMilliC, MainC;

MainC.SoftwareInit -> HilTimerMilliC;

TimerMilli = HilTimerMilliC;

} Listing 7.18: TimerMilliP auto–wires HilTimerMilliC to MainC.SoftwareInit

TimerMilliC encapsulates a wiring pattern, wiring to the timer service with a call to unique(), for other components to use. When a component instantiates a TimerMilliC, it creates a copy of the TimerMilliC code, which includes a call to unique(UQ_TIMER_MILLI). The line of code

components X, new TimerMilliC();
X.Timer -> TimerMilliC; Listing 7.19: Instantiating a TimerMilliC

is essentially this:

components X, TimerMilliP;

X.Timer -> TimerMilliP.TimerMilli[unique(UQ_TIMER_MILLI)];

Listing 7.20: Expanding a TimerMilliC Instantiation

TimerMilliP is itself a configuration, which wires to HilTimerMilliC, which is a configuration. When

a component calls Timer.start() on a TimerMilliC, the actual function it invokes is Timer.start() on Virtual-

izeTimerC. Let’s step through the complete wiring path for an application that creates a timer. BlinkAppC

wires the BlinkC module to its three timers:

configuration BlinkAppC{}

implementation {

components MainC, BlinkC, LedsC;

components new TimerMilliC() as Timer0;

components new TimerMilliC() as Timer1;

components new TimerMilliC() as Timer2;

BlinkC -> MainC.Boot;

MainC.SoftwareInit -> LedsC;

BlinkC.Timer0 -> Timer0;

BlinkC.Timer1 -> Timer1;

BlinkC.Timer2 -> Timer2;

BlinkC.Leds -> LedsC;
} Listing 7.21: The Blink application

Wiring BlinkC.Timer0 to Timer0 establishes this wiring chain (the key passed to unique is UQ_TIMER_MILLI throughout):

BlinkC.Timer0 -> Timer0.Timer

Timer0.Timer = TimerMilliP.TimerMilli[unique(UQ_TIMER_MILLI)]
TimerMilliP.TimerMilli[unique(UQ_TIMER_MILLI)] = HilTimerMilliC.TimerMilli[unique(UQ_TIMER_MILLI)]
HilTimerMilliC.TimerMilli[unique(UQ_TIMER_MILLI)] = VirtualizeTimerC.Timer[unique(UQ_TIMER_MILLI)]

Listing 7.22: The full module–to–module wiring chain in Blink (BlinkC to VirtualizeTimerC)

BlinkC and VirtualizeTimerC are the two modules; the intervening components are all configurations.

When nesC compiles this code, all of the intermediate layers will be stripped away, and BlinkC.Timer0.start

will be a direct function call on VirtualizeTimerC.Timer[...].start.

Many of TinyOS’s basic services use this pattern of a generic configuration to manage a keyspace for

a parameterized interface. For example, one of the non-volatile storage abstractions in TinyOS is BlockStor-

ageC (covered in TEP 103). This abstraction is intended for reading and writing large objects in a random

access fashion. This abstraction provides the BlockRead and BlockWrite interfaces. The abstraction sup-

ports there being multiple readers and writers with a similar pattern to what Timer uses, although unlike

Timer only one read or write can be outstanding at any time. The underlying implementation therefore

keeps track of who’s outstanding and enqueues other requests.

7.4.2 AMSenderC

TimerMilliC is reasonably simple: all it really does is encapsulate a wiring with unique() in order to make

sure there aren’t client collisions and in order to simplify wiring. Because HilTimerMilliC has to know the

state of all of the outstanding timers in order to do its job well, it provides a virtualized abstraction, which

TimerMilliC can just export.

Active messages are slightly different. The basic platform active message component, ActiveMessageC,

provides AMSend, parameterized by the AM id. However, ActiveMessageC can only have a single packet

outstanding at any time. If it is already sending a packet and a component calls AMSend.send, ActiveMessageC returns FAIL or EBUSY. From the perspective of a caller, this is a bit of a pain. If it wants to send the

packet, it has to wait until the radio is free, but doesn’t have a very easy way of figuring out when this will be.
TinyOS 1.x had a global (not parameterized) sendDone event, which the radio would signal whenever

it finished sending any packet. That way, if a component tried to send and received a FAIL, it could try to resend when it handled the sendDone event. This mostly works, except that if multiple components wire

to sendDone, then the fan-out order determines the priority of the send requests. E.g., if a hog of a component handles sendDone and happens to be first in the fan-out, it will always get first dibs and will monopolize the radio.
T2 solves this problem through the AMSenderC component, which is a generic configuration. AMSenderC is a virtualized abstraction: every instance of AMSenderC acts like ActiveMessageC. That is, each AMSenderC can handle a single outgoing packet. This means that each component that wires to an AMSenderC can act independently of the other components, and not worry about fan-out scheduling. The one-deep queue of ActiveMessageC is replaced by N one-deep queues, one for each of the clients.

Each AMSenderC having its own one-deep queue is not sufficient. There’s also the question of what

order the senders get to send their packets. Under the covers, what the active message layer does is maintain an array of N pending packets, where N is the number of AMSenderC components. Each AMSenderC is a client of the active message sending abstraction, and so has a client ID that indexes into this array. The

implementation keeps track of the last client that was able to send a packet, and makes sure that everyone

else waiting gets a chance before that client does again.

Accomplishing this is a little trickier than TimerMilliC, because a request to send has a few parameters.

With Timer, those parameters (period, single-shot vs. repeat) are state that the timer implementation has

to keep track of in the first place. With the AMSenderC, it’s a bit different: those parameters just need

to be stored until the call to the underlying ActiveMessageC. The send queue could just store all of these parameters, but that would use up 4 extra bytes of RAM per entry (2 for the destination, 1 for the AM type, and 1 for the length).

It turns out that the Packet and AMPacket interfaces have operations exactly for this situation. They allow a component to get and set packet fields. For example, a component can call Packet.setLength to set the length field and recover it with Packet.getLength. Components that just need basic send or receive abstractions can just use AMSend or Receive. The Packet interface, though, allows data structures such as queues to store temporary state within the packet and then recover it when it’s time to actually send, so it can be passed as parameters. This means that the AM send queue with n clients allocates a total of 2n+1 bytes of state, as pointers on microcontrollers are usually 2 bytes (on the intelmote2, though, they’re four bytes, so it allocates 4n+1).

This means that the AMSenderC abstraction needs to do the following things:


1. Provide an AMSend interface

2. Store the AMSend.send parameters before putting a packet on the queue

3. Statically allocate a single private queue entry

4. Store a send request packet in the queue entry when it’s not full

5. When it’s actually time to send the packet, reconstitute the send parameters and call ActiveMessageC

What makes this abstraction more tricky is that there are two keyspaces. ActiveMessageC provides

AMSend based on the AM type keyspace. The send queue, in contrast, has a client ID keyspace for keeping

track of which AMSenderC is sending. Because the queue needs to be able to send any AM type, it uses

a parameterized AMSend and directly wires to ActiveMessageC.AMSend. So the overall structure goes

something like this:

1. Component calls AMSenderC.AMSend.send

2. This calls AMQueueEntryP, which stores the length, AM id, and destination in the packet

3. AMQueueEntryP is a client of AMQueueP and calls Send.send with its client ID

4. AMQueueP checks that its queue entry is free and puts the packet into it.

5. Some time later, AMQueueP pulls the packet off the queue and calls AMSend.send on ActiveMessageC with the parameters that AMQueueEntryP stored.

6. When ActiveMessageC signals AMSend.sendDone, AMQueueP signals Send.sendDone to AMQueueEntryP, which signals AMSend.sendDone to the original calling component.

This is the code for AMSenderC:

generic configuration AMSenderC(am_id_t AMId) {

provides {

interface AMSend;

interface Packet;

interface AMPacket;

interface PacketAcknowledgements as Acks;
  }
}
implementation {

components new AMQueueEntryP(AMId) as AMQueueEntryP;


components AMQueueP, ActiveMessageC;

AMQueueEntryP.Send -> AMQueueP.Send[unique(UQ_AMQUEUE_SEND)];

AMQueueEntryP.AMPacket -> ActiveMessageC;

AMSend = AMQueueEntryP;

Packet = ActiveMessageC;

AMPacket = ActiveMessageC;

Acks = ActiveMessageC;

} Listing 7.23: The AMSenderC generic configuration

A send queue entry is just responsible for storing send information in a packet:

generic module AMQueueEntryP(am_id_t amId) {
  provides interface AMSend;
  uses {
    interface Send;
    interface AMPacket;
  }
}
implementation {
  command error_t AMSend.send(am_addr_t dest,
                              message_t* msg,
                              uint8_t len) {
    call AMPacket.setDestination(msg, dest);
    call AMPacket.setType(msg, amId);
    return call Send.send(msg, len);
  }
  command error_t AMSend.cancel(message_t* msg) {
    return call Send.cancel(msg);
  }
  event void Send.sendDone(message_t* m, error_t err) {
    signal AMSend.sendDone(m, err);
  }
  command uint8_t AMSend.maxPayloadLength() {
    return call Send.maxPayloadLength();
  }
  command void* AMSend.getPayload(message_t* m) {
    return call Send.getPayload(m);
  }
} Listing 7.24: AMQueueEntryP


The queue itself sits on top of ActiveMessageC:

configuration AMQueueP {

provides interface Send[uint8_t client];
}
implementation {

components AMQueueImplP, ActiveMessageC;

Send = AMQueueImplP;

AMQueueImplP.AMSend -> ActiveMessageC;

AMQueueImplP.AMPacket -> ActiveMessageC;

AMQueueImplP.Packet -> ActiveMessageC;

} Listing 7.25: AMQueueP

Finally, within AMQueueImplP, the logic to send a packet looks like this:


if (current == QUEUE_EMPTY) {
  // nothing pending: wait for the next Send.send
}
else {
  message_t* msg;
  am_id_t id;
  am_addr_t addr;
  uint8_t len;
  msg = queue[current];
  id = call AMPacket.getType(msg);
  addr = call AMPacket.getDestination(msg);
  len = call Packet.getLength(msg);
  if (call AMSend.send[id](addr, msg, len) == SUCCESS) {
    // send accepted: sendDone will advance the queue
  }
} Listing 7.26: AMQueueImplP pseudocode

7.4.3 CC2420SpiC

Another, more complex example of using generic configurations is CC2420SpiC. This component provides

access to the CC2420 radio over an SPI bus. When the radio stack software wants to interact with the radio,

it makes calls on an instance of this component. For example, telling the CC2420 to send a packet if there

is a clear channel involves writing to one of the radio’s registers (TXONCCA). To write to the register, the

stack sends a small series of bytes over the bus, which basically say “I’m writing to register number X with value Y.” The very fast speed of the bus means that small operations such as these can be made synchronous without any significant concurrency problems.

In addition to small register reads and writes, the chip also supports accessing the receive and transmit

buffers, which are 128-byte regions of memory, as well as the radio’s configuration memory, which stores

things such as cryptographic keys and the local address (which is used for determining whether to send an

acknowledgment). These operations are split-phase. For example, before the stack writes to TXONCCA

to send a packet, it must first execute a split-phase write with the CC2420Fifo interface (the receive and

transmit buffers are FIFO memories).

All of the operations boil down to four interfaces:

• CC2420Strobe: Access to command registers. Writing a command register tells the radio to take an

action, such as transmit a packet, clear its packet buffers, or transition to transmit mode. This interface

has a single command, strobe, which writes to the register.

• CC2420Register: Access to data registers. These registers can be both read and written, and store

things such as hardware configuration, addressing mode, and clear channel assessment thresholds.

This interface supports reads and writes as single-phase operations.

• CC2420Ram: Access to configuration memory. This interface supports both reads and writes, as

split-phase operations.

• CC2420Fifo: Access to the receive and transmit FIFO memory buffers. This interface supports both

reads and writes, as split-phase operations. While one can write to the receive buffer, the CC2420

supports this only for debugging purposes.

A component that needs to interact with the CC2420 instantiates an instance of CC2420SpiC:

generic configuration CC2420SpiC() {

provides interface Resource;

provides interface CC2420Strobe as SFLUSHRX;

provides interface CC2420Strobe as SFLUSHTX;

provides interface CC2420Strobe as SNOP;

provides interface CC2420Strobe as SRXON;

provides interface CC2420Strobe as SRFOFF;

provides interface CC2420Strobe as STXON;

provides interface CC2420Strobe as STXONCCA;

provides interface CC2420Strobe as SXOSCON;

provides interface CC2420Strobe as SXOSCOFF;



