DMA
Besides the CPU, which is responsible for controlling the address, data and control buses, we have the DMA controller, which can perform the same bus operations as the CPU, concurrently with other activities of the processor. Whenever instructed by the CPU, the DMA can perform memory read/write cycles.
The DMA can perform only read and write cycles, nothing else, and only when activated by the processor. When the DMA is used, the processor is involved only once, to activate the transfer, so it can perform other operations in the meantime. Unfortunately, there is only one bus, which means that while the DMA is using the memory, the CPU cannot do the same: it can only perform other kinds of operations (or wait). The DMA and the CPU cannot use the memory concurrently.
There are 3 different ways to program the DMA so that it does not conflict with the processor:
1. Burst: the transfer happens in a single operation, and if the CPU needs the bus it must wait. This is the fastest transfer mode because there is no interruption; however, there could be problems with urgent requests that the CPU must handle.
2. Cycle stealing: the chunk of data is divided into sub-chunks; after each sub-chunk transfer the CPU is able to access the memory. The CPU is never locked out for a long time.
3. Transparent: the DMA works only when the CPU is not using the bus. This is the slowest transfer mode.
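The cycle-stealing mode can be modeled with a small C sketch. This is purely illustrative: a plain `memcpy` stands in for the hardware transfer, `SUB_CHUNK`, `bus_busy` and `cpu_accesses` are invented names, and real DMA controllers are programmed through device-specific registers.

```c
#include <assert.h>
#include <string.h>

/* Illustrative model of a "cycle stealing" DMA transfer: the chunk
 * is split into sub-chunks, and between sub-chunks the bus is
 * released so the CPU can access memory. All names are hypothetical. */
#define SUB_CHUNK 4

static int bus_busy = 0;     /* 1 while the DMA owns the bus */
static int cpu_accesses = 0; /* CPU memory accesses interleaved with the DMA */

static void dma_cycle_stealing(char *dst, const char *src, int len)
{
    for (int off = 0; off < len; off += SUB_CHUNK) {
        int n = (len - off < SUB_CHUNK) ? len - off : SUB_CHUNK;
        bus_busy = 1;                   /* DMA steals the bus */
        memcpy(dst + off, src + off, n);
        bus_busy = 0;                   /* bus released: CPU may access memory */
        cpu_accesses++;                 /* model one CPU access per gap */
    }
}
```

In burst mode the loop body would run once for the whole chunk (no gaps for the CPU); in transparent mode the DMA would copy only when `bus_busy` is guaranteed to be free on the CPU side.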
OS overview
What is the purpose of an operating system? It is to manage the basic software and abstract it at the application level. Let's say we need to program a device that requires access to some hardware, for example an LED. What are we doing physically? Creating a waveform that switches the LED on and off. We do not need to know what is behind the function LED_toggle, because what we need is the result and a function that produces it. The basic software developer has abstracted the details associated with specific functionalities of the hardware, so that the work of the application developer is simplified.
The application program is developed on the basis of the abstractions provided by the operating system. The OS abstracts the details of the hardware. At the application level only the system call interface is known. The system call interface is the set of functions that are available to the application developer to access the hardware.
The application developer needs to know only the name of the function used to access some hardware; the OS developer must know all the components inside the function.
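The LED_toggle example above can be sketched in C. This is a hedged illustration: here a plain variable stands in for the GPIO output register, and the pin number is an assumption; on real hardware `gpio_output` would be a memory-mapped register defined by the basic software.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: the application calls LED_toggle() without
 * knowing the hardware details hidden behind it. */
static uint32_t gpio_output = 0;   /* stand-in for a GPIO output register */
#define LED_PIN (1u << 5)          /* assumed pin; real value is board-specific */

void LED_toggle(void)              /* provided by the basic software */
{
    gpio_output ^= LED_PIN;        /* flip the pin: LED on <-> off */
}
```

The application sees only the name `LED_toggle` (the system call interface); the register address, the pin mask, and the waveform timing all live on the OS side.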
Operating system architecture
There are 3 main operating system architectures:
- Flat architecture
- Layered Architecture with Monolithic Kernel
- Microkernel
Flat architecture
The flat architecture is intended to provide most of the functionalities in the least space. Let's say that we have a limited amount of resources available, then each bit counts. We must save memory resources. Within the memory there is no distinction between the application and the operating system.
This is a strong limitation, because if for any reason there is a bug in the application, it can corrupt the operating system, stopping the entire embedded system. This happened in the Toyota unintended-acceleration case, where a bug caused a corruption of the operating system. Nevertheless, this model is used by a number of different operating systems (Micrium, OSES…) because in most applications resources are very limited.
The main characteristic of such an architecture is that the user address space (dedicated for the application) and the kernel address space (dedicated for the basic software) are not distinguished on the memory.
Let's try to analyse real code implemented with Micrium OS. The operating system is implemented as a set of C functions that are called in a certain order, starting from the main function. The operating system is initialized within the main function; it will then start to provide the abstractions used by the application.
Then a task is created, and we enter an endless loop. All the functionalities are implemented as C functions that can be invoked as in any C program. The entire system is a set of C functions, and our application is another C function embedded along with the C functions that implement the OS. There is no distinction between what is application and what is OS. In fact, when we built our code, we obtained just one file as output (S32DS.elf) containing both the OS and the application. With Windows, instead, there is one bundle of executables which is Windows and another bundle of executables which are the applications (a clean distinction between the OS and the applications).
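The structure described above can be sketched as follows. These are simplified stand-ins, not the real Micrium µC/OS API (whose functions take many more parameters): the point is only that the "OS" and the application are ordinary C functions linked into one image.

```c
#include <assert.h>

/* Simplified stand-ins for the flat-architecture structure:
 * OS and application are just C functions in the same image.
 * Names mimic the Micrium style but are hypothetical. */
static int os_initialized = 0;
static int tasks_created  = 0;

static void OS_Init(void) { os_initialized = 1; }            /* init the OS */
static void OS_TaskCreate(void (*task)(void))                /* register a task */
{
    (void)task;
    tasks_created++;
}

static void app_task(void) { /* application code would loop here forever */ }

int app_main(void)
{
    OS_Init();               /* the OS is initialized inside main */
    OS_TaskCreate(app_task); /* the application is just another C function */
    /* a real OS_Start() would hand control to the scheduler and never return */
    return 0;
}
```

Everything above compiles into a single executable: there is no boundary a buggy `app_task` could not cross to corrupt the OS state.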
To summarize: we use this architecture when we have limited resources, but the main drawback is that, in the presence of a bug in the application, the entire embedded system can be corrupted, since there is no distinction between the two. We gain simplicity (a compact memory footprint), but we lose robustness.
Layered architecture with monolithic kernel
Normally a monolithic kernel is rebuilt and the system rebooted to add to its functionalities. But what if the system cannot be rebooted? There must be a feature that allows components to be added to the kernel while it is running. Kernel modules solve this problem by injecting modules into the running kernel: it is like starting a new application, but started in the kernel space. This solution is needed to add functionalities to the kernel without rebuilding it. Unfortunately, there is no protection against internal errors.
Microkernel
To solve the problem of inner errors in a monolithic kernel, the idea is to keep to a minimum the part of code that is actually critical (meaning that a failure there kills the entire system). In this scenario there is a very small kernel which runs in its own address space and offers very few functions (services and the CPU manager). These few lines of code are analysed very carefully, so that bugs are very unlikely, i.e. no system crash. All the other components are developed in the user address space. If something goes wrong in one of those components, the error remains in that area. If a driver is bugged, the bug will kill the driver and only the driver. In this case, the risk of corrupting the OS is reduced. In avionics this is the OS architecture that is used.
Why aren't all OSes developed following the microkernel architecture, since it is the safest one? Because the added robustness is paid for with reduced efficiency.
Let's say that we have an application that has to communicate with a device driver.
- In the monolithic kernel, the application calls the driver asking for its service. Since there are only two layers, and the device driver is within the monolithic kernel, the boundary between them is crossed twice: the first time when the driver is called and the second when the driver responds. Each time the address space is changed, a certain amount of time is spent (this operation is called a context switch).
- In the microkernel, the context switches are 4 instead of 2, because the application asks the microkernel (1), the microkernel asks the device driver (2) and receives an answer (3) that is then communicated back to the application (4). It is true that it is more robust, but the efficiency is lower.
Process management
Process in memory
The CPU manager allows the CPU to be used in the most efficient way. The goal is to find a way to use the CPU at 100% in any situation. The CPU runs at 150 MHz, and with this power we must perform a virtually infinite amount of actions.
We will focus on a program under execution, i.e. a process/task. Process and task both refer to a program in execution. The memory image of the process is the following one: there are 6 areas, each with a certain name.
- .text: the program area, the bytes of the memory image of the process that contain the instructions executed by the process.
- .rodata: the variables that are initialized and will not change their value during execution.
- .data: the variables that are initialized and will change their value during the execution of the program.
- .bss: the set of memory cells that contain the variables that are not initialized and will change their value during execution.
- Stack area and heap area, which are used for dynamic memory management.
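A minimal C sketch of these section assignments (the placement labels assume a typical GCC/ELF toolchain; exact section names can vary):

```c
#include <assert.h>

/* Where each object lands in the process memory image: */
const int alpha = 25;   /* initialized, never written  -> .rodata        */
int beta = 44;          /* initialized, writable       -> .data          */
int tmp;                /* not initialized, writable   -> .bss (zeroed)  */

int foo(int x)          /* the machine code of foo     -> .text          */
{
    int local = x + beta;   /* local variables live on the stack */
    return local;
}
```

Uninitialized globals such as `tmp` are guaranteed by the C standard to start at zero, which is why .bss needs no stored initial values in the executable.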
Example: alpha is a const variable initialized to 25; this means it will always read as 25 and will not change over time. Beta is initialized to 44, but it can change over time. Tmp is a global variable that is not initialized. Foo is a function. All the instructions needed to implement the foo function are part of the program area. Beta is an initialized global variable (.data), alpha is a constant global variable (.rodata), and tmp is a global variable that is not initialized, hence it goes into .bss. Why make this subdivision, keeping all the same kinds of elements in the same place? The technical reason regards the memory. If we placed all the variables in volatile memory, we could run out of memory. Therefore, it is convenient to place some of the elements in the non-volatile memory. Unfortunately, it is not