
Chapter: Embedded and Real Time Systems - Process and Operating Systems

Important Questions and Answers: Process and Operating Systems


PROCESS AND OPERATING SYSTEMS

 

1. What are the states of a process?

Running

Ready

Waiting

 

2. What is the ready state?

Processes which are ready to run but are not currently using the processor are in the 'ready' state.

 

3. Define scheduling.

 

Scheduling is the process of deciding which process has the right to use the processor at a given time.

 

4. What is scheduling policy?

 

A scheduling policy defines the way in which processes are chosen for promotion from the ready state to the running state.

 

5. Define hyperperiod.

 

The hyperperiod is the duration of time over which the schedule repeats; it is the least common multiple of the periods of all the processes.

 

6. What is schedulability?

 

Schedulability indicates whether an execution schedule exists for a collection of processes that satisfies the system's timing requirements.

 

7. What are the types of scheduling?

Time division multiple access scheduling.

 

Round robin scheduling.

 

8. What is cyclostatic scheduling?

 

In this type of scheduling, the scheduling interval is the length of the hyperperiod H. A cyclostatic schedule divides this interval into equal-sized time slots, and processes run in their assigned slots.

 

9. Define round-robin scheduling.

 

This type of scheduling also employs the hyperperiod as the scheduling interval. The processes are run in a given order within that interval.

 

10. What is scheduling overhead?

Scheduling overhead is the execution time required to choose the next process to run.

 

11. What is meant by context switching?

The actual process of changing from one task to another is called a context switch.

 

12. Define priority scheduling.

In priority scheduling, each process is assigned a priority and the scheduler runs the highest-priority ready process. A simple scheduler maintains a priority queue of processes that are in the runnable state.

 

13. What is rate monotonic scheduling?

 

Rate monotonic scheduling is an approach that is used to assign task priority for a preemptive system.

 

14. What is critical instant?

The critical instant is the situation in which the process or task has its largest response time.

 

15. What is critical instant analysis?

 

Critical-instant analysis is used to determine whether a system is schedulable. It shows that priorities should be assigned to processes based on their periods.

 

16. Define earliest deadline first scheduling.

 

This type of scheduling is another task priority policy that uses the nearest deadline as the criterion for assigning the task priority.

 

17. What is an IPC mechanism?

 

An interprocess communication (IPC) mechanism lets a process communicate with other processes in order to implement an application under an operating system.

 

18. What are the two types of communication?

Blocking communication        

Non blocking communication

 

19. Give the different styles of inter process communication?

Shared memory.

Message passing.




1. Explain Multiple Tasks and Multiple Processes?

 

Many embedded computing systems do more than one thing; the environment can cause mode changes that in turn cause the embedded system to behave quite differently.

 

The text compression box provides a simple example of rate control problems. A control panel on a machine provides an example of a different type of rate control problem, the asynchronous input.

 

Multirate embedded computing systems are very common, including automobile engines, printers, and telephone PBXs.

 

The co-routine was a programming technique commonly used in the early days of embedded computing to handle multiple processes.

 

The ARM code in the co-routines is not intended to represent meaningful computations.

 

The co-routine structure lets us implement more general kinds of flow of control than is possible with only subroutines; the identification of co-routine entry points provides us with some hooks for nonhierarchical calls and returns within the program.

 

However, the co-routine does not do nearly enough to help us construct complex programs with significant timing properties.

 

The co-routine in general does very little to simplify the design of code that satisfies timing requirements.

 

 

2. Explain Context Switching?

 

The context switch is the mechanism for moving the CPU from one executing process to another.

 

Clearly, the context switch must be bug-free: a process that does not look at a real-time clock should not be able to tell that it was stopped and then restarted.

 

Cooperative multitasking: in this simpler form of context switching, each process voluntarily gives up the CPU. The most general form of context switching is preemptive multitasking.

 

Preemptive multitasking: the interrupt is an ideal mechanism on which to build context switching for preemptive multitasking.

 

A timer generates periodic interrupts to the CPU.

The interrupt handler for the timer calls the operating system, which saves the previous process’s state in an activation record, selects the next process to execute, and switches the context to that process.

 

Processes and object-oriented design: UML often refers to processes as active objects, that is, objects that have independent threads of control.

 

We can implement the preemptive context switches using the same basic techniques.

 

The only difference between the two is the triggering event: voluntary release of the CPU in the cooperative case, and a timer interrupt in the preemptive case.

 

 

3. Explain Scheduling policies?

 

A scheduling policy defines how processes are selected for promotion from the ready state to the running state.

 

Utilization is one of the key metrics in evaluating a scheduling policy.

 

Rate-monotonic scheduling (RMS), introduced by Liu and Layland [Liu73], was one of the first scheduling policies developed for real-time systems and is still very widely used.

 

The theory underlying RMS is known as rate-monotonic analysis (RMA).

 

Earliest deadline first (EDF) is another well-known scheduling policy. It is a dynamic priority scheme: it changes process priorities during execution based on initiation times.

 

RMS versus EDF: EDF can extract higher utilization out of the CPU, but it may be difficult to diagnose the possibility of an imminent overload.

 

A closer look at our modeling assumptions: our analyses of RMS and EDF have made some strong assumptions.

 

Other POSIX scheduling policies: in addition to SCHED_FIFO, POSIX supports two other scheduling policies, SCHED_RR and SCHED_OTHER.

 

SCHED_OTHER is defined to allow non-real-time processes to intermix with real-time processes.

 

 

4. Explain Inter process Communication Mechanism?

 

Signals: Unix supports another, very simple communication mechanism, the signal.

 

A signal is simple because it does not pass data beyond the existence of the signal itself.

 

Signals in UML: a UML signal is actually a generalization of the Unix signal. While a Unix signal carries no parameters other than a condition code, a UML signal is an object.

 

Shared memory communication: conceptually, semaphores are the mechanism we use to make shared memory safe.

 

POSIX supports semaphores, but it also supports a direct shared memory mechanism.

 

POSIX supports counting semaphores in the _POSIX_SEMAPHORES option. A counting semaphore allows more than one process access to a resource at a time.

 

Message-based communication: the shell syntax of the pipe is very familiar to Unix users. An example appears below.

 

% foo file1 | baz > file2

A parent process uses the pipe() function to create a pipe to talk to a child. It must do so before the child is created, or it will have no way to pass the pipe's descriptors to the child.

 

The pipe() function fills in an array of two file descriptors: the first (fd[0]) is the read end and the second (fd[1]) is the write end.

 

 

5. Explain Shared Memory Communication and Message-Based Communication?

 

Shared memory communication: conceptually, semaphores are the mechanism we use to make shared memory safe.

 

POSIX supports semaphores, but it also supports a direct shared memory mechanism.

POSIX supports counting semaphores in the _POSIX_SEMAPHORES option.

A counting semaphore allows more than one process access to a resource at a time.

 

If the semaphore allows up to N resources, then it will not block until N processes have simultaneously passed the semaphore.

 

Message-based communication:

 

The shell syntax of the pipe is very familiar to Unix users. An example appears below.

 

% foo file1 | baz > file2

 

POSIX also supports message queues under the _POSIX_MESSAGE_PASSING facility.

The advantage of a queue over a pipe is that, because queues have names, we do not have to create the queue in one process and pass its descriptor to the other before that process is created, as we must with pipes.




Copyright © 2018-2021 BrainKart.com; All Rights Reserved. (BS) Developed by Therithal info, Chennai.