Thursday, August 27, 2009

RESOURCE ALLOCATION GRAPH

  • P1 is holding an instance of R2 and requests an instance of R1
  • P2 is holding an instance of R1
  • P3 is holding an instance of R1 and an instance of R2
  • P4 is holding an instance of R2


  • P1 is holding an instance of R2 and requests an instance of R1
  • P2 is holding instances of R1 and R2 and requests an instance of R3
  • P3 is holding an instance of R3 and requests an instance of R2

RESOURCE ALLOCATION GRAPH

  • V is partitioned into two types:

P = {P1, P2, ..., Pn}, the set consisting of all the processes in the system.
R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.
  • request edge – directed edge Pi --> Rj
  • assignment edge – directed edge Rj --> Pi
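A resource-allocation graph can be checked for a cycle with a depth-first search. The C sketch below is purely illustrative (the node numbering and names are my own, not from the notes): processes and resource types share one node space, and an adjacency matrix holds both request edges (Pi --> Rj) and assignment edges (Rj --> Pi). It encodes the second example above, whose cycle P1 --> R1 --> P2 --> R3 --> P3 --> R2 --> P1 signals the deadlock.

    #include <stdio.h>

    #define N 6   /* nodes 0..2 = P1..P3, nodes 3..5 = R1..R3 */

    /* adjacency matrix: edge[u][v] = 1 means u --> v */
    static int edge[N][N];

    /* DFS colours: 0 = unvisited, 1 = on current path, 2 = finished */
    static int colour[N];

    /* returns 1 if a cycle is reachable from node u */
    static int dfs(int u)
    {
        colour[u] = 1;
        for (int v = 0; v < N; v++) {
            if (!edge[u][v])
                continue;
            if (colour[v] == 1)        /* back edge: cycle */
                return 1;
            if (colour[v] == 0 && dfs(v))
                return 1;
        }
        colour[u] = 2;
        return 0;
    }

    int main(void)
    {
        edge[0][3] = 1;   /* P1 requests R1    */
        edge[4][0] = 1;   /* R2 assigned to P1 */
        edge[3][1] = 1;   /* R1 assigned to P2 */
        edge[4][1] = 1;   /* R2 assigned to P2 */
        edge[1][5] = 1;   /* P2 requests R3    */
        edge[5][2] = 1;   /* R3 assigned to P3 */
        edge[2][4] = 1;   /* P3 requests R2    */

        for (int u = 0; u < N; u++)
            if (colour[u] == 0 && dfs(u)) {
                printf("cycle found: deadlock possible\n");
                return 0;
            }
        printf("no cycle: no deadlock\n");
        return 0;
    }

Remember that for single-instance resources a cycle implies deadlock; with multiple instances per resource type, a cycle is only a necessary condition.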

Thursday, August 20, 2009

deadlock

1. Deadlock Characterization

necessary conditions:

1) Mutual exclusion:
• at least one resource must be held in a non-sharable mode

2) Hold and wait:
• a process must be holding at least one resource and waiting for another

3) No preemption:
• a resource cannot be forcibly taken away from the process holding it

4) Circular wait:
• E.g., A is waiting for B, which is waiting for C, which is waiting for A (the sketch below shows two threads closing exactly this kind of cycle).
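The four conditions are easy to reproduce with two pthread mutexes locked in opposite orders. The C sketch below is a deliberately broken toy (the names are made up): each mutex is exclusive (mutual exclusion), each thread holds one mutex while waiting for the other (hold and wait), a mutex cannot be taken from its holder (no preemption), and the two waits close a circle (circular wait). Depending on the interleaving, a run may hang forever.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    static void *thread1(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock_a);   /* hold A ...         */
        pthread_mutex_lock(&lock_b);   /* ... and wait for B */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
    }

    static void *thread2(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock_b);   /* hold B ...         */
        pthread_mutex_lock(&lock_a);   /* ... and wait for A */
        pthread_mutex_unlock(&lock_a);
        pthread_mutex_unlock(&lock_b);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, thread1, NULL);
        pthread_create(&t2, NULL, thread2, NULL);
        pthread_join(t1, NULL);        /* may never return */
        pthread_join(t2, NULL);
        puts("finished: no deadlock this run");
        return 0;
    }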

deadlock

2. methods for handling deadlock
3. deadlock prevention

* negate one of the necessary conditions

Negating Mutual Exclusion:

example:shared use of a printer

»give exclusive use of the printer to each user in turn wanting to print?
»deadlock possibility if exclusive access to another resource also allowed
»better to have a spooler process to which print jobs are sent

- complete output file must be generated first

example: file system actions

»give a process exclusive access rights to a file directory

example: moving a file from one directory to another

- possible deadlock if allowed exclusive access to two directories simultaneously
- should write code so as only to need to access one directory at a time
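Where accessing one directory at a time is not possible, the standard alternative is to negate circular wait by acquiring the two locks in a fixed global order. The helper below is a minimal sketch (the address-based ordering and the directory names are just one convenient, invented choice):

    #include <pthread.h>
    #include <stdint.h>

    /* lock two mutexes in a fixed global order (here: by address),
     * which negates the circular-wait condition */
    static void lock_pair(pthread_mutex_t *m1, pthread_mutex_t *m2)
    {
        if ((uintptr_t)m1 < (uintptr_t)m2) {
            pthread_mutex_lock(m1);
            pthread_mutex_lock(m2);
        } else {
            pthread_mutex_lock(m2);
            pthread_mutex_lock(m1);
        }
    }

    static void unlock_pair(pthread_mutex_t *m1, pthread_mutex_t *m2)
    {
        pthread_mutex_unlock(m1);
        pthread_mutex_unlock(m2);
    }

    int main(void)
    {
        static pthread_mutex_t dir_src = PTHREAD_MUTEX_INITIALIZER;
        static pthread_mutex_t dir_dst = PTHREAD_MUTEX_INITIALIZER;

        lock_pair(&dir_src, &dir_dst);   /* same order from every thread */
        /* ... move the file between the two directories ... */
        unlock_pair(&dir_src, &dir_dst);
        return 0;
    }

If every thread that needs both directories goes through lock_pair, all threads agree on the acquisition order and no cycle can form.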

deadlock

4. deadlock detection

• Allow system to enter deadlock state

• Detection algorithm

• Recovery scheme
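A common detection algorithm works on Allocation and Request matrices: repeatedly find a process whose outstanding requests can be met by the currently available resources, assume it runs to completion, and reclaim its allocation; any process never marked finished is deadlocked. The sketch below uses made-up data in which P1 and P2 wait on each other:

    #include <stdio.h>

    #define NPROC 3
    #define NRES  2

    int main(void)
    {
        /* illustrative data, not from the notes */
        int avail[NRES]          = {0, 0};
        int alloc[NPROC][NRES]   = {{1, 0}, {0, 1}, {0, 0}};
        int request[NPROC][NRES] = {{0, 1}, {1, 0}, {0, 0}};
        int finish[NPROC] = {0};

        for (int changed = 1; changed; ) {
            changed = 0;
            for (int p = 0; p < NPROC; p++) {
                if (finish[p])
                    continue;
                int ok = 1;
                for (int r = 0; r < NRES; r++)
                    if (request[p][r] > avail[r])
                        ok = 0;
                if (ok) {            /* p could run to completion: reclaim */
                    for (int r = 0; r < NRES; r++)
                        avail[r] += alloc[p][r];
                    finish[p] = 1;
                    changed = 1;
                }
            }
        }

        for (int p = 0; p < NPROC; p++)
            if (!finish[p])
                printf("P%d is deadlocked\n", p + 1);
        return 0;
    }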
5. deadlock recovery
• Pre-emption – take resources from a process and give them to others

–how to select a victim?

»order of precedence for pre-empting
»number of resources already held
»how many more will it need to complete?
»amount of CPU time already used

–swapping process out of memory may be sufficient

»but may still hold resources involved in deadlock

–may need to roll pre-empted process back

»back to some safe restart point or go back to beginning
»may need to checkpoint processes
»not convenient for user interaction!

–need to avoid starvation of a low-priority process that is always being pre-empted

»include number of previous pre-emptions as a choice factor
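The criteria above can be folded into a single cost function for picking a victim. The struct, field names, and weights below are entirely arbitrary, just to show the shape of the idea:

    #include <stdio.h>

    /* illustrative victim-selection cost: the process with the lowest
     * score is pre-empted first; the weights are arbitrary */
    struct proc_info {
        int priority;        /* order of precedence                 */
        int resources_held;  /* number of resources already held    */
        int cpu_time_used;   /* amount of CPU time already consumed */
        int times_preempted; /* raised to avoid starvation          */
    };

    static int victim_score(const struct proc_info *p)
    {
        return 4 * p->priority
             + 2 * p->resources_held
             + 1 * p->cpu_time_used
             + 8 * p->times_preempted;   /* past victims become costly */
    }

    int main(void)
    {
        struct proc_info a = { 2, 1, 5, 0 };
        struct proc_info b = { 2, 3, 9, 2 };
        /* a scores lower: fewer resources, less CPU, never pre-empted */
        printf("victim is %s\n",
               victim_score(&a) <= victim_score(&b) ? "A" : "B");
        return 0;
    }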

Sunday, August 16, 2009

real-time scheduler

The real-time scheduler (Scheduler/RealTime, implemented in ns-2/scheduler.cc) attempts to synchronize the execution of events with real time. It is currently implemented as a subclass of the list scheduler. The real-time capability in ns is still under development, but is used to introduce an ns simulated network into a real-world topology to experiment with easily-configured network topologies, cross-traffic, etc. This only works for relatively slow network traffic data rates, as the simulator must be able to keep pace with the real-world packet arrival rate, and this synchronization is not presently enforced.
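The underlying idea, dispatching a simulated event no earlier than its timestamp in wall-clock time, can be sketched in a few lines of C. This is only an illustration of the principle (the event type and function names are invented), not ns-2's actual code:

    #include <stdio.h>
    #include <time.h>

    /* hypothetical event: fire at sim-time 'when' (seconds from start) */
    struct event { double when; void (*fire)(void); };

    static double start_real;

    static void hello(void) { puts("event fired"); }

    /* dispatch one event no earlier than its real-time deadline */
    static void dispatch_paced(struct event *e)
    {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        double elapsed = now.tv_sec + now.tv_nsec / 1e9 - start_real;
        if (e->when > elapsed) {       /* event is in the future: wait */
            double d = e->when - elapsed;
            struct timespec ts = { (time_t)d,
                                   (long)((d - (time_t)d) * 1e9) };
            nanosleep(&ts, NULL);
        }                              /* else: we are behind real time */
        e->fire();
    }

    int main(void)
    {
        struct timespec t0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        start_real = t0.tv_sec + t0.tv_nsec / 1e9;

        struct event e = { 0.5, hello };   /* fire at sim-time 0.5 s */
        dispatch_paced(&e);                /* waits ~0.5 s of real time */
        return 0;
    }

If the event's deadline has already passed, the sketch fires it immediately, which is exactly the unenforced "keep pace with real time" situation described above.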

Thursday, August 13, 2009

Multi-processor scheduling

*Will consider only shared-memory multiprocessors

*Salient features:
-One or more caches: cache affinity is important
-Semaphores/locks typically implemented as spin-locks: preemption during critical sections is a problem
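A spin-lock of the kind just mentioned can be written in a few lines with the GCC/Clang __atomic built-ins. This is a minimal test-and-set sketch, not a production lock:

    /* minimal test-and-set spin-lock using GCC/Clang atomic built-ins */
    typedef struct { volatile char locked; } spinlock_t;

    static void spin_lock(spinlock_t *l)
    {
        /* atomically set the flag; loop while the old value was 1 */
        while (__atomic_test_and_set(&l->locked, __ATOMIC_ACQUIRE))
            ;                          /* busy-wait */
    }

    static void spin_unlock(spinlock_t *l)
    {
        __atomic_clear(&l->locked, __ATOMIC_RELEASE);
    }

    int main(void)
    {
        static spinlock_t l;
        spin_lock(&l);
        /* ... critical section: keep it short, never block here ... */
        spin_unlock(&l);
        return 0;
    }

The busy-wait loop also shows why preemption during a critical section hurts: if the lock holder is descheduled, every other CPU trying to enter simply spins until the holder runs again.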
thread scheduler


The thread scheduler is the part of the OS (usually) that is responsible for sharing the available CPUs out between the various threads. How exactly the scheduler works depends on the individual platform, but various modern operating systems (notably Windows and Linux) use largely similar techniques that we'll describe here. We'll also mention some key variations between the platforms.

Note that we'll continue to talk about a single thread scheduler. On multiprocessor systems, there is generally some kind of scheduler per processor, and these per-CPU schedulers then need to be coordinated in some way. (On some systems, switching on different processors is staggered to avoid contention on shared scheduling tables.) Unless otherwise specified, we'll use the term thread scheduler to refer to this overall system of coordinated per-CPU schedulers.

Across platforms, thread scheduling tends to be based on at least the following criteria:

****a priority, or in fact usually multiple "priority" settings that we'll discuss below;

****a quantum, or number of allocated timeslices of CPU, which essentially determines the amount of CPU time a thread is allotted before it is forced to yield the CPU to another thread of the same or lower priority (the system will keep track of the remaining quantum at any given time, plus its default quantum, which could depend on thread type and/or system configuration);

****a state, notably "runnable" vs "waiting";

****metrics about the behaviour of threads, such as recent CPU usage or the time since it last ran (i.e. had a share of CPU), or the fact that it has "just received an event it was waiting for".

Most systems use what we might dub priority-based round-robin scheduling to some extent (a toy version is sketched right after this list). The general principles are:

****a thread of higher priority (which is a function of base and local priorities) will preempt a thread of lower priority;

****otherwise, threads of equal priority will essentially take turns at getting an allocated slice or quantum of CPU;

****there are a few extra "tweaks" to make things work.
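Here is a toy version of priority-based round-robin in C, with one FIFO run queue per priority level; the scheduler always serves the highest non-empty level and rotates within it. All names and the priority values are illustrative:

    #include <stdio.h>
    #include <stddef.h>

    #define NPRIO 8                     /* priority levels, 7 = highest */

    struct thread {
        struct thread *next;
        int quantum_left;               /* timeslices before a forced yield */
    };

    /* one FIFO run queue per priority level */
    static struct thread *head[NPRIO], *tail[NPRIO];

    static void enqueue(int prio, struct thread *t)
    {
        t->next = NULL;
        if (tail[prio])
            tail[prio]->next = t;
        else
            head[prio] = t;
        tail[prio] = t;
    }

    /* the highest non-empty priority wins; equal priorities take turns */
    static struct thread *pick_next(void)
    {
        for (int p = NPRIO - 1; p >= 0; p--) {
            if (head[p]) {
                struct thread *t = head[p];
                head[p] = t->next;
                if (!head[p])
                    tail[p] = NULL;
                return t;
            }
        }
        return NULL;                    /* nothing runnable: idle */
    }

    int main(void)
    {
        struct thread a = {NULL, 4}, b = {NULL, 4}, c = {NULL, 4};
        enqueue(5, &a);
        enqueue(5, &b);
        enqueue(7, &c);                 /* higher priority: served first */

        printf("%s\n", pick_next() == &c ? "c runs first" : "?");
        printf("%s\n", pick_next() == &a ? "then a" : "?");
        printf("%s\n", pick_next() == &b ? "then b" : "?");
        return 0;
    }

In a real scheduler, a thread whose quantum expires would be re-enqueued at the back of its own level, which is what makes equal priorities take turns.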

Saturday, August 8, 2009

CPU SCHEDULING

CPU Scheduling

CPU and I/O Burst Cycle


  • The execution of a process consists of a cycle of CPU execution and I/O wait.

A process begins with a CPU burst, followed by an I/O burst, followed by another CPU burst and so on. The last CPU burst will end with a system request to terminate the execution.

  • The CPU burst durations vary from process to process and computer to computer.

An I/O bound program has many very short CPU bursts.

A CPU bound program might have a few very long CPU bursts.


Types of Scheduling

The key to multiprogramming is scheduling. There are four types of scheduling that an OS has to perform. These are:

  • Long Term scheduling

>>>The long term scheduling determines which programs are admitted to the system for processing. Thus, it controls the level of multiprogramming.

>>>Once admitted, a job or a user program becomes a process and is added to the queue for the short term scheduling (in some cases added to a queue for medium term scheduling).

>>>Long term scheduling is performed when a new process is created.

>>>The criteria used for long-term scheduling may include first-come-first-served, priority, expected execution time, and I/O requirements.

  • Medium-Term Scheduling

>>>The medium-term scheduling is a part of the swapping function. This is a decision to add a process to those that are at least partially in main memory and therefore available for execution.

>>>The swapping-in decision is made on the basis of the need to manage the degree of multiprogramming and the memory requirements of the swapped-out processes.

  • Short-Term Scheduling

>>>A decision of which ready process to execute next is made in short-term scheduling.

  • I/O Scheduling

>>>The decision as to which process’s pending I/O requests shall be handled by the available I/O device is made in I/O scheduling.

CPU Scheduler

  • Whenever the CPU becomes idle, the OS must select one of the processes in the ready queue to be executed.

The selection process is carried out by the short-term scheduler, or CPU scheduler. The CPU scheduler selects a process from the ready queue and allocates the CPU to that process.

  • CPU scheduling decisions may take place when a process:

1. The running process changes from running to waiting state (the current CPU burst of that process is over).

2. The running process terminates.

3. A waiting process becomes ready (a new CPU burst of that process begins).

4. The current process switches from running to ready state (e.g. because of a timer interrupt).

  • Scheduling under 1 and 2 is nonpreemptive.

Once a process is in the running state, it will continue until it terminates or blocks itself.

  • Scheduling under 3 and 4 is preemptive.

Currently running process may be interrupted and moved to the Ready state by OS.

Allows for better service since any one process cannot monopolize the processor for very long.

Dispatcher

  • Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:

Switching context

Switching to user mode

Jumping to the proper location in the user program to restart that program

  • Dispatch latency – time it takes for the dispatcher to stop one process and start another running.
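The "switching context / jumping to the proper location" steps can be made concrete in user space with the POSIX ucontext(3) calls (obsolescent, but still available on Linux). This is an illustration of the mechanism, not how a kernel dispatcher is really written:

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;
    static char task_stack[64 * 1024];

    static void task(void)
    {
        puts("task: running after the dispatch");
        /* returning resumes uc_link, i.e. main_ctx */
    }

    int main(void)
    {
        getcontext(&task_ctx);
        task_ctx.uc_stack.ss_sp = task_stack;
        task_ctx.uc_stack.ss_size = sizeof task_stack;
        task_ctx.uc_link = &main_ctx;      /* where to go when task returns */
        makecontext(&task_ctx, task, 0);

        puts("dispatcher: switching context");
        swapcontext(&main_ctx, &task_ctx); /* save ours, jump into the task */
        puts("dispatcher: control is back");
        return 0;
    }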



Substantial info about threads

Processes, threads, parallelism in real-time systems

1.) The genesis and the rationale of the idea of multithread processes are analyzed, and the following related issues are discussed: support of different granularities of parallelism, explicit vs. implicit intrathread synchronization, kernel vs. user support of threads, relations with the object-oriented paradigm, relations with remote procedure calls (RPCs) and distributed systems, and relations with open systems. The focus is on the application of multithread processes in real-time systems and on the way parallelism can be better exploited in these systems.

2.) Linux

The POSIX thread libraries are a standards based thread API for C/C++. It allows one to spawn a new concurrent process flow. It is most effective on multi-processor or multi-core systems where the process flow can be scheduled to run on another processor thus gaining speed through parallel or distributed processing. Threads require less overhead than "forking" or spawning a new process because the system does not initialize a new system virtual memory space and environment for the process. While most effective on a multiprocessor system, gains are also found on uniprocessor systems which exploit latency in I/O and other system functions which may halt process execution. (One thread may execute while another is waiting for I/O or some other system latency.) Parallel programming technologies such as MPI and PVM are used in a distributed computing environment while threads are limited to a single computer system. All threads within a process share the same address space. A thread is spawned by defining a function and its arguments which will be processed in the thread. The purpose of using the POSIX thread library in your software is to execute software faster.
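A minimal example of the API described above (spawning a thread from a function and an argument, then joining it); compile with cc -pthread. The worker function and its argument are of course invented for illustration:

    #include <pthread.h>
    #include <stdio.h>

    /* the function and argument that define the spawned thread */
    static void *worker(void *arg)
    {
        int id = *(int *)arg;
        printf("worker %d running\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        int id = 1;

        if (pthread_create(&t, NULL, worker, &id) != 0) {
            perror("pthread_create");
            return 1;
        }
        pthread_join(t, NULL);   /* wait for the thread to finish */
        puts("main: worker joined");
        return 0;
    }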

3.) Java

Modern programming languages and operating systems encourage the use of threads to exploit concurrency and simplify program structure. An integral and important part of the Java language is its multithreading capability. Despite the portability of Java threads across almost all platforms, the performance of Java threads varies according to the multithreading support of the underlying operating system and the way Java Virtual Machine maps Java threads to the native system threads. In this paper, a well-known compute-intensive benchmark, the EP benchmark, was used to examine various performance issues involved in the execution of threads on two different multithreaded platforms: Windows NT and Solaris. Experiments were carried out to investigate thread creation and computation behavior under different system loads, and to explore execution features under certain extreme situations such as the concurrent execution of very large number of Java threads. Some of the experimental results obtained from threads were compared with a similar implementation using processes. These results show that the performance of Java threads differs depending on the various mechanisms used to map Java threads to native system threads, as well as on the scheduling policies for these native threads. Thus, this paper provides important insights into the behavior and performance of Java threads on these two platforms, and highlights the pitfalls that may be encountered when developing multithreaded Java programs.