Unfortunately, Linux is not the utopian operating system for all applications. Its need for memory management, many megabytes of RAM and large mass storage (>40 Mbytes) immediately limits the range of hardware platforms it can successfully run on. Mass storage is not only used for holding file data. It also provides, via the virtual memory management scheme, overflow storage for applications which are too big to fit in the system RAM all at once. Its use of a non-real-time scheduler, which gives no guarantee as to when a task will complete, further excludes Linux from many applications.
Through its use of memory management to protect its resources, the simple approach of writing an application task which drives a peripheral directly via its physical memory is rendered almost impossible. Physical memory can be accessed via the slow ‘/dev/mem’ file technique or by incorporating a shared memory driver, but these methods are either very slow or restrictive. There is also no straightforward method of using or accessing the system interrupts from an application, which forces the user to adopt polling techniques.
In addition, there can be considerable overhead in managing all the look-up tables, checking access rights and so on. This overhead appears when a task is loaded, during any memory allocation and whenever the virtual memory system needs to swap memory blocks out to disk; the software support that performs this housekeeping is normally part of the operating system. In the swapping case, if the system memory is small compared with the virtual memory demands of the application, the memory management driver will consume a lot of processing power and time in simply moving data to and from the disk. In extreme cases, this overhead starts to dominate the system, which is working hard but achieving very little. The addition of more memory relieves the need to swap and releases more of the processing power to execute the application.
Finally, the system makes extensive use of disk caching techniques, which use RAM buffers to hold recently used data for faster access. This helps to reduce the performance degradation, particularly when combined with external page swapping and slow mass storage. The system does not write data immediately to disk but stores it in a buffer. If a power failure occurs, the data may only be memory resident and is therefore lost. As this can include directory structures and the superblock, it can corrupt or destroy files, directories or even entire file systems! Such systems cannot be treated with the contempt that other, more resilient operating systems can tolerate: Linux systems have to be carefully started, shut down, administered and backed up.
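An application that cannot afford to lose a record to the write-behind cache can force its data out to the medium explicitly. The sketch below, with an invented helper name, shows the standard POSIX approach: fsync() flushes the file's buffered contents to disk before the call returns. This protects the file data itself, though not the rest of the file system's cached metadata, which is why an orderly shutdown is still required.

```c
/* Minimal sketch: writing a record and defeating the write-behind
 * cache for that file with fsync(). 'write_record' is a hypothetical
 * helper, not part of any standard API. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Append 'rec' to the file at 'path' and push it to disk; 0 on success. */
int write_record(const char *path, const char *rec)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return -1;
    if (write(fd, rec, strlen(rec)) < 0 ||   /* data lands in the cache */
        fsync(fd) < 0) {                     /* flush the cache to disk */
        close(fd);
        return -1;
    }
    return close(fd);
}
```

Calling fsync() after every write trades away exactly the performance the cache was added to provide, so it is normally reserved for genuinely critical data such as log or transaction records.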
One of the more interesting things about the whole Linux movement is that, given a problem, the developers will find a way round it and someone somewhere will come up with a new version. Given that Linux in its initial form is not ideal for embedded systems, can an embedded real-time version be created that would allow the wealth of Linux software to be executed and reused? The answer has been yes.