- P1 is holding an instance of R2 and requesting an instance of R1
- P2 is holding an instance of R1
- P3 is holding an instance of R1 and an instance of R2
- P4 is holding an instance of R2
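For reference, a minimal sketch of how these hold/request edges could be represented in code is shown below; the map-based representation and the class name are assumptions chosen for illustration, not part of the original example.

```java
import java.util.List;
import java.util.Map;

// Sketch of the allocation and request edges listed above, for processes
// P1..P4 and resources R1, R2. Representation is assumed for illustration.
public class AllocationGraphSketch {
    public static void main(String[] args) {
        // Assignment edges: process -> resources it currently holds.
        Map<String, List<String>> held = Map.of(
                "P1", List.of("R2"),
                "P2", List.of("R1"),
                "P3", List.of("R1", "R2"),
                "P4", List.of("R2"));
        // Request edges: process -> resources it is waiting for.
        Map<String, List<String>> requested = Map.of("P1", List.of("R1"));

        held.forEach((p, rs) -> System.out.println(p + " holds " + rs));
        requested.forEach((p, rs) -> System.out.println(p + " requests " + rs));
    }
}
```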
CPU and I/O Burst Cycle
A process begins with a CPU burst, followed by an I/O burst, followed by another CPU burst, and so on. The last CPU burst ends with a system request to terminate the execution.
An I/O-bound program has many very short CPU bursts.
A CPU-bound program might have a few very long CPU bursts.
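As a rough illustration of the burst cycle (not part of the original notes), the sketch below models a process as an alternating sequence of CPU and I/O burst lengths; the burst values are invented.

```java
import java.util.List;

// Minimal sketch: a "process" as an alternating sequence of CPU and I/O bursts.
// Burst lengths (in arbitrary time units) are invented for illustration only.
public class BurstCycleDemo {
    public static void main(String[] args) {
        // Starts and ends with a CPU burst, with I/O bursts in between.
        List<Integer> bursts = List.of(4, 10, 3, 12, 5);
        for (int i = 0; i < bursts.size(); i++) {
            String kind = (i % 2 == 0) ? "CPU burst" : "I/O burst";
            System.out.println(kind + " of length " + bursts.get(i));
        }
        System.out.println("Last CPU burst ends with a system request to terminate.");
    }
}
```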
Long-Term Scheduling
- Long-term scheduling determines which programs are admitted to the system for processing; it therefore controls the degree of multiprogramming.
- Once admitted, a job or user program becomes a process and is added to the queue for the short-term scheduler (in some cases to a queue for the medium-term scheduler).
- Long-term scheduling is performed when a new process is created.
- The criteria used for long-term scheduling may include first-come-first-served order, priority, expected execution time, and I/O requirements (a minimal admission sketch follows this list).
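The sketch below illustrates long-term admission control, assuming FCFS admission order and a fixed multiprogramming limit; both choices, and the class name, are assumptions made only for illustration.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of a long-term scheduler that admits jobs first-come-first-served,
// but only while the degree of multiprogramming stays below a chosen limit.
public class LongTermSchedulerSketch {
    private final Queue<String> jobQueue = new ArrayDeque<>();   // submitted jobs
    private final Queue<String> readyQueue = new ArrayDeque<>(); // admitted processes
    private final int maxDegreeOfMultiprogramming;               // assumed fixed limit

    public LongTermSchedulerSketch(int maxDegree) {
        this.maxDegreeOfMultiprogramming = maxDegree;
    }

    public void submit(String job) {
        jobQueue.add(job);
    }

    // Admit jobs (FCFS) until the multiprogramming limit is reached.
    public void admit() {
        while (!jobQueue.isEmpty() && readyQueue.size() < maxDegreeOfMultiprogramming) {
            String job = jobQueue.poll();
            readyQueue.add(job);   // the job becomes a process in the ready queue
            System.out.println("Admitted " + job);
        }
    }

    public static void main(String[] args) {
        LongTermSchedulerSketch lts = new LongTermSchedulerSketch(2);
        lts.submit("job1");
        lts.submit("job2");
        lts.submit("job3");
        lts.admit(); // admits job1 and job2; job3 waits until a slot frees up
    }
}
```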
Medium-Term Scheduling
- Medium-term scheduling is part of the swapping function: it is the decision to add a process to those that are at least partially in main memory and therefore available for execution.
- The swapping-in decision is based on the need to manage the degree of multiprogramming and on the memory requirements of the swapped-out process.
Short-Term Scheduling
- The decision of which ready process to execute next is made in short-term scheduling.
I/O Scheduling
- The decision as to which process's pending I/O request shall be handled by an available I/O device is made in I/O scheduling.
The selection process is carried out by the short-term scheduler (CPU scheduler). The CPU scheduler selects a process from the ready queue and allocates the CPU to that process. CPU-scheduling decisions may take place under the following four circumstances (a scheduler sketch follows the list):
1. The running process changes from the running state to the waiting state (the current CPU burst of that process is over).
2. The running process terminates.
3. A waiting process becomes ready (a new CPU burst of that process begins).
4. The current process switches from the running state to the ready state (e.g., because of a timer interrupt).
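The following sketch shows how these four events might drive a short-term scheduler; FCFS selection from the ready queue and the method names are assumptions chosen purely for illustration.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of a short-term (CPU) scheduler: on each scheduling event it picks the
// next process from the ready queue. FCFS order is assumed for illustration.
public class ShortTermSchedulerSketch {
    private final Queue<String> readyQueue = new ArrayDeque<>();
    private String running; // process currently holding the CPU, or null

    public void makeReady(String process) {   // event 3: a waiting process becomes ready
        readyQueue.add(process);
    }

    public void onBlockOrTerminate() {        // events 1 and 2: running process blocks or exits
        running = null;
        dispatchNext();
    }

    public void onTimerInterrupt() {          // event 4: preempt the running process
        if (running != null) {
            readyQueue.add(running);          // running -> ready
            running = null;
        }
        dispatchNext();
    }

    private void dispatchNext() {
        running = readyQueue.poll();          // allocate the CPU to the selected process
        System.out.println("Now running: " + running);
    }

    public static void main(String[] args) {
        ShortTermSchedulerSketch s = new ShortTermSchedulerSketch();
        s.makeReady("P1");
        s.makeReady("P2");
        s.onBlockOrTerminate(); // nothing was running; dispatches P1
        s.onTimerInterrupt();   // P1 moves back to ready; P2 is dispatched
    }
}
```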
Under nonpreemptive scheduling, once a process is in the running state it continues until it terminates or blocks itself.
Under preemptive scheduling, the currently running process may be interrupted and moved to the Ready state by the OS. This allows for better service, since no single process can monopolize the processor for very long.
The dispatcher gives control of the CPU to the process selected by the short-term scheduler; this involves (see the sketch after this list):
- Switching context
- Switching to user mode
- Jumping to the proper location in the user program to restart that program
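A conceptual sketch of these dispatcher steps follows, assuming a toy process control block and round-robin order; both are invented for illustration, and a real dispatcher manipulates hardware state in kernel mode rather than Java objects.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Conceptual sketch of what a dispatcher does on a context switch.
// PCB fields and round-robin order are assumptions made for illustration.
public class DispatcherSketch {
    // A toy process control block: just a name and a saved "program counter".
    static final class Pcb {
        final String name;
        long savedProgramCounter;
        Pcb(String name) { this.name = name; }
    }

    private final Queue<Pcb> readyQueue = new ArrayDeque<>();
    private Pcb running;

    void contextSwitch() {
        if (running != null) {
            running.savedProgramCounter += 1;   // 1. save the state of the old process
            readyQueue.add(running);            //    and put it back on the ready queue
        }
        running = readyQueue.poll();            // 2. pick the next ready process
        if (running != null) {
            // 3. "switch to user mode" and jump to the saved location:
            System.out.println("Resuming " + running.name
                    + " at PC=" + running.savedProgramCounter);
        }
    }

    public static void main(String[] args) {
        DispatcherSketch d = new DispatcherSketch();
        d.readyQueue.add(new Pcb("P1"));
        d.readyQueue.add(new Pcb("P2"));
        d.contextSwitch(); // dispatches P1
        d.contextSwitch(); // preempts P1, dispatches P2
    }
}
```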
Modern programming languages and operating systems encourage the use of threads to exploit concurrency and simplify program structure. An integral and important part of the Java language is its multithreading capability. Despite the portability of Java threads across almost all platforms, the performance of Java threads varies according to the multithreading support of the underlying operating system and the way the Java Virtual Machine maps Java threads to the native system threads. In this paper, a well-known compute-intensive benchmark, the EP benchmark, was used to examine various performance issues involved in the execution of threads on two different multithreaded platforms: Windows NT and Solaris. Experiments were carried out to investigate thread creation and computation behavior under different system loads, and to explore execution features under certain extreme situations such as the concurrent execution of a very large number of Java threads. Some of the experimental results obtained from threads were compared with a similar implementation using processes. These results show that the performance of Java threads differs depending on the various mechanisms used to map Java threads to native system threads, as well as on the scheduling policies for these native threads. Thus, this paper provides important insights into the behavior and performance of Java threads on these two platforms, and highlights the pitfalls that may be encountered when developing multithreaded Java programs.
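To make the setting concrete, the sketch below spawns a number of compute-bound Java threads and joins them. It is not the EP benchmark code from the paper; the thread count and the dummy workload are placeholders chosen only to illustrate thread creation and compute behavior.

```java
// Generic sketch of launching many compute-bound Java threads and joining them.
// Not the EP benchmark from the paper; thread count and workload are placeholders.
public class ManyThreadsDemo {
    public static void main(String[] args) throws InterruptedException {
        final int numThreads = 64;                 // assumed value for illustration
        Thread[] workers = new Thread[numThreads];
        long start = System.nanoTime();

        for (int i = 0; i < numThreads; i++) {
            workers[i] = new Thread(() -> {
                double sum = 0.0;                  // dummy CPU-bound work
                for (int k = 0; k < 1_000_000; k++) {
                    sum += Math.sqrt(k);
                }
                if (sum < 0) {                     // keep the JIT from discarding the loop
                    System.out.println(sum);
                }
            });
            workers[i].start();                    // mapping to native threads is handled
        }                                          // by the JVM and the underlying OS

        for (Thread t : workers) {
            t.join();                              // wait for all workers to finish
        }
        System.out.println("Elapsed ms: " + (System.nanoTime() - start) / 1_000_000);
    }
}
```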