Interrupts are probably the most important aspect of any embedded system
design, and they can be responsible for many of the problems encountered when
debugging a system. Although they are simple in concept, there are many
pitfalls that the unwary can fall into. This chapter goes through the
principles behind interrupts and the different mechanisms used with various
processor architectures, and provides a set of do’s and don’ts to help guide
the designer.
What is an interrupt?
We all experience interrupts at some point during our lives and find
that they either pose no problem at all or they can very quickly cause stress
and our performance decreases. For example, take a car mechanic working in a
garage who not only has to work on the cars but also answer the phone. The
normal work of servicing a car continues throughout the day and the only other
task is answering the phone. Not a problem, you might think — but each incoming
phone call is an interrupt and requires the mechanic to stop the current work,
answer the call and then resume the current work. The time it takes to answer
the call depends on what the current activity is. If the call requires the
machanic to simply put down a tool and pick up the phone, the overhead is
short. If the work is more involved, and the mechanic needs to support a
component's weight so it can be let go and then need to clean up a little
before picking up the phone, the overhead can be large. It can be so long that
the caller rings off and the phone call is missed. The mechanic then has to
restart the work. If the mechanic receives a lot of phone calls, it is possible
that more time is spent in getting ready to answer the call and restarting the
work than is actually spent performing the work. In this case, the current work
will not be completed on time and the overall performance will be greatly
reduced.
With an embedded design, the mechanic is the processor and the current
work is the foreground or current task that it is executing. The phone call is
the interrupt and the time taken to respond to it is the interrupt latency. If
the system is not designed correctly, coping with the interrupts can prevent
the system from completing its work or cause it to miss an interrupt. In
either case, the system will start to misbehave. In the same way that humans
become irrational and deviate from their normal behaviour patterns when
continually interrupted while trying to complete some other task,
embedded systems can also start misbehaving! It is therefore essential to
understand how to use interrupts and perhaps when not to, so that the embedded
system can work correctly.
The impact of interrupts and their processing does not stop there
either. It can also affect the overall design and structure of the system,
particularly of the software that will be running on it. A well designed
embedded system is actively designed with interrupts in mind, with a clear
definition of how they are going to be used. The first step is to define what
an interrupt is.
An interrupt is an event, from either an internal or an external source,
in response to which a processor stops its current processing and switches
to a different instruction sequence. The processor may or may not return to
its original processing. So what does this offer the embedded system designer? The key advantage
of the interrupt is that it allows the designer to split software into two
types: background work where tasks are performed while waiting for an interrupt
and foreground work where tasks are performed in response to interrupts. The
interrupt mechanism is normally transparent to the background software and it
is not aware of the existence of the foreground software. As a result, it
allows software and systems to be developed in a modular fashion without
having to create a spaghetti bolognese blob of software where all the functions
are thrown together. The best way of explaining this is to consider several
alternative methods of writing software for a simple system.
The system consists of a processor that has to periodically read in data
from a port, process it and write it out. While waiting for the data, it is
designed to perform some form of statistical analysis.
The spaghetti method
In this case, the code is written in a straight sequence where
occasionally the analysis software goes and polls the port to see if there is
data. If there is data present, this is processed before returning to the
analysis. To write such code, there is extensive use of branching to
effectively change the flow of execution from the background analysis work to
the foreground data transfer opera-tions. The periodicity is controlled by two
factors:
• The number of times the port is polled while executing the analysis task. This is determined by the data transfer rate.
• The time taken between each polling operation to execute the section of the background analysis software.
With a simple system, this is not too difficult to control but as the
complexity increases or the data rates go up, requiring a higher polling rate,
this software structure rapidly starts to fall apart and become inefficient.
The timing is software based and therefore will change if any of the analysis
code is changed or extended. If additional analysis is done, then more polling
checks need to be inserted. As a result, the code often quickly becomes a hard
to understand mess.
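As an illustration, a sketch of this structure in C might look like the following. The register names, addresses and analysis routines are purely illustrative assumptions, not taken from any particular device:

    #include <stdint.h>

    /* Hypothetical memory-mapped data port; the addresses are assumptions */
    #define PORT_STATUS (*(volatile uint8_t *)0x40001000u)
    #define PORT_DATA   (*(volatile uint8_t *)0x40001004u)
    #define DATA_READY  0x01u

    static void process_data(uint8_t byte) { (void)byte; /* transform, write out */ }
    static void analysis_chunk_1(void) { /* part of the statistical analysis */ }
    static void analysis_chunk_2(void) { /* another part of the analysis */ }

    void main_loop(void)
    {
        for (;;) {
            analysis_chunk_1();
            if (PORT_STATUS & DATA_READY)    /* poll point 1: service code  */
                process_data(PORT_DATA);     /* written inline between work */
            analysis_chunk_2();
            if (PORT_STATUS & DATA_READY)    /* poll point 2: the same code */
                process_data(PORT_DATA);     /* reproduced all over again   */
            /* ...more analysis chunks, more duplicated poll points... */
        }
    }

Note how the polling frequency depends entirely on how long each analysis chunk takes to execute: change the analysis code and the timing changes with it.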
The situation can be improved through the use of subroutines: instead of
reproducing the code to poll and service the ports, subroutines are called
(a sketch of this appears after the list below). While this does improve the
structure and quality of the code, it does not remove the fundamental problem
of a software timed design. There are several difficulties with this type of
approach:
• The system timing and synchronisation are completely software dependent, which means that the design now assumes certain processor speeds and instruction timings to provide a required level of performance.
• If the external data transfers come in bursts and are asynchronous, then the polling operations are usually inefficient. A large number of checks will be needed to ensure that data is not lost. This is the old polling vs. interrupt argument reappearing.
• It can be very difficult to debug because there are multiple entry points within the code that perform the same operation. There are two asynchronous activities going on in the system: the software execution and the incoming data. This means that the routes from the analysis software to the polling and data transfer code will be taken almost at random; which poll point services the data depends on when the data arrived and what the background software was doing. This makes errors extremely difficult to reproduce, and it is frequently responsible for intermittent problems that are very hard to solve precisely because they are hard to reproduce.
• The software/system design is now time referenced as opposed to being event driven. For the system to work, time constraints are imposed on it, such as the frequency of polling, which cannot be broken. As a result, the system can become very inefficient. To use an office analogy, it is not very efficient to have to send a nine page fax if you have to be present to insert each page separately. You either stay and do nothing while you wait for the right moment to insert the next page, or you have to check the progress repeatedly so that you do not miss the next slot.
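Using the same illustrative names as the earlier sketch, the subroutine version simply factors out the duplicated poll-and-service code. The structure improves, but the timing remains software dependent:

    static void check_port(void)             /* one copy of the poll-and-   */
    {                                        /* service code...             */
        if (PORT_STATUS & DATA_READY)
            process_data(PORT_DATA);
    }

    void main_loop(void)
    {
        for (;;) {
            analysis_chunk_1();
            check_port();                    /* ...called between chunks,   */
            analysis_chunk_2();              /* so the call spacing still   */
            check_port();                    /* sets the polling rate       */
        }
    }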
Using interrupts
An interrupt is, as its name suggests, a way of stopping the current
software thread that the processor is executing, switching to a different
software routine and executing it, and then restoring the processor’s status
to what it was prior to the interrupt so that the interrupted thread can
continue processing.
Interrupts can happen asynchronously to the operation and can thus be
used very efficiently with systems that are event as opposed to time driven.
However, they can be used to create time driven systems without having to
resort to software-based timers.
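For example, a periodic timer interrupt can provide the time reference instead of instruction counting. A minimal sketch follows, assuming a timer that has been configured elsewhere to interrupt every 1 ms; the names and the vector attachment are illustrative, not from any particular device:

    #include <stdint.h>

    static volatile uint32_t ticks;     /* advanced by hardware events,     */
                                        /* not by counting instructions     */

    void timer_isr(void)                /* entered once per timer period;   */
    {                                   /* vector setup is device specific  */
        ticks++;
    }

    void wait_ms(uint32_t ms)           /* assumes a 1 ms timer period      */
    {
        uint32_t start = ticks;
        while ((ticks - start) < ms)
            ;                           /* background work could go here    */
    }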
To convert the previous example to one using interrupts, all the polling
and data port code is removed from the background analysis software. The data
transfer code is written as part of the interrupt service routine (ISR)
associated with the interrupt generated by the data port hardware. When the
port receives a byte of data, it generates an interrupt. This activates the
ISR which processes the data before handing execution back to the background
task. The beauty of this type of operation is that the background task can be
written independently of the data port code, and that the whole timing of the
system now moves from being dependent on the polling intervals to how quickly
the data can be accessed and processed.
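A sketch of the interrupt-driven version follows. How an ISR is declared and attached to the data port's interrupt vector varies with the processor and toolchain, so the attachment is only indicated in a comment; the names are the same illustrative assumptions as before:

    #include <stdint.h>

    #define PORT_DATA (*(volatile uint8_t *)0x40001004u)  /* assumed address */

    static void process_data(uint8_t byte) { (void)byte; /* transform, write out */ }
    static void analysis_chunk(void) { /* the statistical analysis */ }

    /* Interrupt service routine: entered when the data port raises its
       interrupt request. Attaching it to the vector table is toolchain
       and processor specific, so it is not shown here. */
    void port_isr(void)
    {
        uint8_t byte = PORT_DATA;   /* read the waiting byte                */
        process_data(byte);         /* service the data, then return to the */
    }                               /* interrupted background task          */

    void main_loop(void)
    {
        for (;;)
            analysis_chunk();       /* pure background work: no port or     */
    }                               /* polling code anywhere in here        */

The background loop contains no knowledge of the port at all; all the data transfer work lives in the ISR.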