Buffer exchange
Buffer exchange is a technique used to simplify control code and to allow multiple tasks to process data simultaneously without needing control structures to supervise access. In many ways it is a variation of the double buffering technique.
This type of mechanism is common in the SPOX operating system used for DSP processors, and in these types of embedded systems it is relatively simple to implement.
The main idea of the system is the concept of exchanging empty buffers for full ones. Such a system will have at least two buffers, although many more may be used. Instead of a normal read or write operation where the data to be transferred is passed as a parameter, a pointer to a buffer is passed instead. With a write, this buffer contains the data to be transferred; with a read, it is simply an empty buffer. The command is handled by a device driver, which returns a pointer to a second buffer: full of data in the case of a read, empty in the case of a write. In effect, a buffer is passed to the driver and a different buffer is received in return. With a read, an empty buffer is passed and a buffer full of data is returned. With a write, a full buffer is passed and an empty one is received. It is important to note that the buffers are different: the driver does not take the passed buffer, use it and then send it back. A code sketch of this exchange is given after the list of advantages below. The advantages that this process offers are:
• The data is not copied between the device driver and the requesting task.
• Both the device driver and the requesting task have their own separate buffer area and there is thus no need to have semaphores to control any shared buffers or memory.
• The requesting task can use multiple buffers to assimilate large amounts of data before processing.
• The device driver can be very simple to write.
• The level of inter-task communication to indicate when buffers are full or ready for collection can be varied and thus be designed to fit the end application or system.
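To make the exchange concrete, the following sketch shows a write-side exchange in C. It is illustrative only: the names (exbuf_t, driver_write_exchange), the buffer size and the use of printing in place of a real transfer are assumptions for this example, not part of SPOX or any particular driver API.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define BUF_SIZE 64              /* fixed buffer size, chosen at build time  */

typedef struct {
    uint8_t data[BUF_SIZE];
    size_t  valid;               /* number of valid bytes in the buffer      */
} exbuf_t;

/* Two buffers are enough for a simple exchange: at any moment the task owns
 * one and the driver owns the other, so no semaphore is needed.             */
static exbuf_t buf_a, buf_b;
static exbuf_t *driver_owned = &buf_b;

/* Write exchange: the task passes a full buffer and receives a different,
 * empty buffer in return. The driver keeps the full buffer for output.      */
static exbuf_t *driver_write_exchange(exbuf_t *full)
{
    exbuf_t *empty = driver_owned;   /* the buffer the driver gives back     */

    /* A real driver would queue 'full' for DMA or interrupt-driven output;
     * printing it stands in for the transfer here.                          */
    fwrite(full->data, 1, full->valid, stdout);

    driver_owned = full;             /* the passed buffer now belongs to the
                                        driver                               */
    empty->valid = 0;
    return empty;                    /* a *different* buffer goes back       */
}

int main(void)
{
    exbuf_t *buf = &buf_a;           /* the task starts out owning buf_a     */

    for (int i = 0; i < 3; i++) {
        /* Fill the task's buffer, then swap it for an empty one.            */
        buf->valid = (size_t)snprintf((char *)buf->data, BUF_SIZE,
                                      "message %d\n", i);
        buf = driver_write_exchange(buf);
    }
    return 0;
}

The key point is that driver_write_exchange never returns the buffer it was given; ownership simply swaps back and forth, which is why no copying and no semaphore are needed.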
There are some disadvantages however:
• There is a latency introduced that, in the worst case, depends on the size of the buffer. Partial filling can be used to reduce this if needed, but it requires some additional control to signify the end of valid data within a buffer (see the read-side sketch after this list).
• Many implementations assume a fixed buffer size which is predetermined, usually during the software compilation and build process. This has to be big enough for the largest message, but may therefore be very inefficient in terms of memory usage for small and simple data. Variable-size buffers are a solution to this, but they require more complex control to handle the length of the valid data. The buffer size must still be large enough for the biggest message, and thus the problem of buffer size granularity may come back again.
• The buffers must be accessible by both the driver and the requesting tasks. This may seem very obvious, but if the device driver is running in supervisor mode and the requesting task is in user mode, the memory management unit or address decode hardware may prevent the correct access. This problem can also occur with segmented architectures like the 8086, where the buffers are in different segments.
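As a follow-on to the first disadvantage above, the sketch below shows one way of handling partial fills on the read side: the driver sets a valid field to mark the end of the valid data in the buffer it hands back. Again, the names and the simulated fill are assumptions for illustration only.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define BUF_SIZE 64              /* still fixed at build time                */

typedef struct {
    uint8_t data[BUF_SIZE];
    size_t  valid;               /* end of the valid data within the buffer  */
} exbuf_t;

static exbuf_t buf_a, buf_b;
static exbuf_t *driver_owned = &buf_b;

/* Read exchange: the task hands in an empty buffer and gets back a different
 * buffer that the driver has (possibly only partially) filled. The 'valid'
 * field is the extra control needed to signify the end of valid data.       */
static exbuf_t *driver_read_exchange(exbuf_t *empty)
{
    exbuf_t *full = driver_owned;

    /* Simulate a partial fill: far fewer bytes than BUF_SIZE have arrived,
     * so handing the buffer over early reduces latency.                     */
    full->valid = (size_t)snprintf((char *)full->data, BUF_SIZE, "partial\n");

    driver_owned = empty;            /* the driver keeps the empty buffer    */
    return full;                     /* the task gets a filled buffer        */
}

int main(void)
{
    exbuf_t *buf = &buf_a;

    for (int i = 0; i < 3; i++) {
        buf = driver_read_exchange(buf);

        /* Only 'valid' bytes are meaningful; the unused remainder of the
         * fixed-size buffer is the memory-efficiency cost noted above.      */
        fwrite(buf->data, 1, buf->valid, stdout);
        buf->valid = 0;              /* hand it back empty next time         */
    }
    return 0;
}

In a real driver the buffer would typically be filled by an interrupt service routine or a DMA transfer rather than the simulated call shown here.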