Thursday, July 30, 2009

1. Thread
  • single-threaded process - Single-threaded programs have one path of execution, and multi-threaded programs have two or more paths of execution. Single-threaded programs can perform only one task at a time and have to finish each task in sequence before they can start another. For most programs, one thread of execution is all you need, but sometimes it makes sense to use multiple threads in a program to accomplish multiple simultaneous tasks.
  • multi-threaded process - Multithreading generally occurs by time slicing (similar to time-division multiplexing) across the computer system. In a single-processor environment, the processor "context switches" between the different threads. In this case the processing is not literally simultaneous, for the single processor is really doing only one thing at a time; but the switching can happen so fast as to give the illusion of simultaneity to an end user.
2. Benefits of multi-threaded programming

Benefits of Multithreaded programming can be broken down into four major categories:

1. Responsiveness: Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user.

2. Resource sharing: By default, threads share the memory and the resources of the process to which they belong. The benefit of code sharing is that it allows an application to have several different threads of activity all within the same address space.

3. Economy: Allocating memory and resources for process creation is costly. Alternatively, because threads share resources of the process to which they belong, it is more economical to create and context switch threads.

4. Utilization of multiprocessor architectures: The benefits of multithreading can be greatly increased in a multiprocessor architecture, where each thread may be running in parallel on a different processor. A single-threaded process can be only run on one CPU, no matter how many are available. Multithreading on a multi-CPU machine increases concurrency.
3. User thread - Threads can also be built in user space, meaning that a library or program is responsible for scheduling and executing them. There is still a penalty for a context switch in user space, but the cost is less than an operating-system context switch. User-space threads are sometimes called fibers, to suggest that they are "lighter" than kernel threads; for the remainder of this article, user-space threads are called fibers and kernel threads are called threads. Fibers have an additional advantage over kernel threads: since only one fiber can be executing at a time, only one can modify a shared resource at a time.
4. Kernel thread - Threads are supported in the kernel, which means that the scheduling of which thread is supposed to run is done by the operating system. Kernel threads allow the operating system to schedule different threads to run on different processors in multiprocessor computers, which can be an enormous performance increase.
5. Thread library - A basic C thread library, libfiber, was written using two techniques: fibers and Linux kernel threads. It provides an extremely simple implementation for creating, destroying, and scheduling fibers or threads. It should be used only as an example for learning how threads are implemented, since there are many issues with signals and synchronization. For real applications there are more polished libraries, such as pthreads for kernel threads or GNU Portable Threads (Pth) for user-space threads.
6. Multithreading models - define the relationship between user-level threads and kernel threads:

  • many-to-one model - maps many user-level threads to one kernel thread.
  • one-to-one model - maps each user thread to a kernel thread. It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call.
  • many-to-many model - multiplexes many user-level threads to a smaller or equal number of kernel threads. The number of kernel threads may be specific to either a particular application or a particular machine.
Interprocess communication

1. Direct communication
– each process wanting to communicate must explicitly name the recipient or sender of the communication
– send and receive primitives defined:
send ( P, message ) : send a message to process P
receive ( Q, message ) : receive a message from process Q


2. Indirect communication - messages are sent to and received from mailboxes (or ports)
» a mailbox can be viewed as an object into which messages are placed by processes and from which messages can be removed by other processes
– each mailbox has a unique ID
– two processes can communicate only if they have a shared mailbox

send ( A, message ) : send a message to mailbox A
receive ( A, message ) : receive a message from mailbox A

3. Synchronization - send and receive operations may be blocking:
» the sender is suspended until the receiving process does a corresponding receive
» the receiver is suspended until a message is sent for it to receive

  • blocking send - when a process sends a message it blocks until the message is received at the destination.
  • non blocking send - After sending a message the sender proceeds with its processing without waiting for it to reach the destination.
  • blocking receive - When a process executes a receive it waits blocked until the receive is completed and the required message is received.
  • non blocking receive - The process executing the receive proceeds without waiting for the message.

4. Buffering - the number of messages that can reside in a link temporarily

  • Zero capacity - queue length 0: the sender must wait until the receiver is ready to take the message
  • Bounded capacity - finite-length queue: messages can be queued as long as the queue is not full; otherwise the sender will have to wait
  • Unbounded capacity - any number of messages can be queued (in virtual space), so the sender is never delayed

5. Producer–consumer with message passing
– Binary semaphores: one message token
– General (counting) semaphores: more than one message token
– message blocks are used to buffer data items
– the scheme uses two mailboxes: mayproduce and mayconsume

– producer:
» get a message block from mayproduce
» put data item in block
» send message to mayconsume

– consumer:
» get a message from mayconsume
» consume data in block
» return empty message block to mayproduce mailbox

Thursday, July 16, 2009

chapter 4

5.) interprocess communication - Inter-process communication (IPC) is a set of techniques for the exchange of data among multiple threads in one or more processes. Processes may be running on one or more computers connected by a network. IPC techniques are divided into methods for message passing, synchronization, shared memory, and remote procedure calls (RPC). The method of IPC used may vary based on the bandwidth and latency of communication between the threads, and the type of data being communicated.

There are several reasons for providing an environment that allows process cooperation:

  • Information sharing
  • Computation speedup
  • Modularity
  • Convenience

4. Cooperating processes

a.) An independent process cannot affect or be affected by the execution of another process.

b.) A cooperating process can affect or be affected by the execution of another process.

c. ) Advantages of process cooperation

  • Information sharing
  • Computation speed-up
  • Modularity
  • Convenience

chapter 4

1. The concept of a process

a.) process state - the stage of execution that a process is in. These states determine which processes are eligible to receive CPU time.




  • Since there is a single processor, at any time there will be at most one process running. We call this the current process.

  • There may be several processes that are waiting to use the processor. These processes are said to be ready.

  • There may be some that are waiting, not for the processor, but for a different resource or event. These processes are said to be blocked.

b.) process control - The collection of these attributes is referred to as the process control block.
A Process Control Block (PCB, also called Task Control Block or Task Struct) is a data structure in the operating system kernel containing the information needed to manage a particular process. The PCB is "the manifestation of a process in an operating system".

Included information

Implementations differ, but in general a PCB will include, directly or indirectly:
  • The identifier of the process (a process identifier, or PID)
  • Register values for the process including, notably, the Program Counter value for the process
  • The address space for the process
  • Priority (a higher-priority process gets first preference; e.g., the nice value on Unix operating systems)
  • Process accounting information, such as when the process was last run, how much CPU time it has accumulated, etc.
  • Pointer to the next PCB i.e. pointer to the PCB of the next process to run
  • I/O Information (i.e. I/O devices allocated to this process, list of opened files, etc)

During a context switch, the running process is stopped and another process is given a chance to run. The kernel must stop the execution of the running process, copy out the values in hardware registers to its PCB, and update the hardware registers with the values from the PCB of the new process.

c.) threads - A thread (or lightweight process) is a basic unit of CPU utilization; it consists of:
  • program counter
  • register set
  • stack space
A thread shares with its peer threads its:
  • code section
  • data section
  • operating-system resources
collectively known as a task. A traditional or heavyweight process is equal to a task with one thread.


operating system - 4

2.Process scheduling

a.) scheduling queues - Queue scheduling mechanism in a data packet transmission system:
A queue scheduling mechanism in a data packet transmission system includes: a transmission device for transmitting data packets; a reception device for receiving them; a set of queue devices, each associated with a priority rank, into which each transmitted data packet is stored according to its priority rank; and a queue scheduler for reading, at each packet cycle, a packet from the queue device determined by a normal priority-preemption algorithm. The mechanism also includes a credit device that provides, at each packet cycle, a value N defining the priority rank to be considered by the queue scheduler, so that a data packet is read by the scheduler from the queue device corresponding to priority N instead of the queue device determined by the normal priority-preemption algorithm.

b.)schedulers - The scheduler is the component of an operating system that determines which process should be run, and when.

We will specify:

  • The service provided - the scheduler specification
  • A system that provides this service - the scheduler implementation

c.) context switch - Switching the CPU from one process to another requires saving the state of the old process and loading the saved state of the new process. The kernel copies the values in the hardware registers out to the old process's PCB, then updates the hardware registers with the values from the PCB of the new process. Context-switch time is pure overhead, because the system does no useful work while switching; its cost depends on hardware support, such as memory speed and the number of register sets.

OPERATING SYSTEM

3. Operations on processes


a.) process creation - Process creation in UNIX
The seven-state logical process model we considered in a previous lecture can accommodate the UNIX process model with some modifications, actually becoming a ten-state model.

First, as we previously observed, UNIX executes most kernel services within a process's context, by implementing a mechanism which separates the two possible modes of execution of a process. Hence our previously unique "Running" state must actually be split into a "User Running" state and a "Kernel Running" state.

Moreover, a process-preemption mechanism is usually implemented in the UNIX scheduler to enforce priority. This allows a process returning from a system call (hence after having run in kernel mode) to be immediately blocked and put in the ready-processes queue instead of returning to user-mode running, leaving the CPU to another process. So it is worth considering a "Preempted" state as a special case of "Blocked".

Finally, among exited processes there is a distinction between those whose parent is already waiting for their completion (possibly to clean up after them), and those whose parent is still active and might decide to wait for them sometime in the future (and then be immediately notified of the child's termination). These last processes are called "Zombie", while the others are "Exited". The difference is that the system needs to maintain an entry in the process table for a zombie, since its parent might reference it in the future, while the entry for an exited (and waited-for) process can be discarded without further fiddling. So the much-talked-about "Zombie" processes of UNIX are nothing but entries in a system table, the system having already disposed of all the rest of their image. This process model is depicted in fig. 5.





b.)process termination - Processes terminate in one of two ways:

  • Normal Termination occurs by a return from main or when requested by an explicit call to exit or _exit.
  • Abnormal Termination occurs as the default action of a signal or when requested by abort.

Thursday, July 9, 2009

quiz #3

1.) What are the major activities of the OS with regard to process management?

The OS is responsible for the following activities in connection with process management:
  1. process creation and deletion
  2. process suspension and resumption
  3. provision of mechanisms for:
  • process synchronization
  • process communication
  • deadlock handling

2.) What are the major activities of the OS with regard to main memory?

Some of the activities in connection with memory management that are handled by the OS:

  • keep track of which parts of memory are currently being used, and by whom
  • decide which processes to load when memory space becomes available
  • allocate and deallocate memory space as needed

3.) What are the major activities of the OS with regard to secondary storage?

The OS is responsible for the following activities in connection with disk management:

  • free-space management
  • storage allocation
  • disk scheduling

4.) What are the major activities of the OS with regard to file management?

The OS is responsible for the following activities in connection with file management:

  • file creation and deletion
  • directory creation and deletion
  • support of primitives for manipulating files and directories
  • mapping files onto secondary storage
  • file backup on stable (nonvolatile) storage media

5.) What is the purpose of the command interpreter?

The command interpreter serves as the interface between the user and the OS.

  • user-friendly, mouse-based windowing environments on the Macintosh and in Microsoft Windows
  • in MS-DOS and UNIX, commands are typed on the keyboard and displayed on a screen or printing terminal, with the enter (or return) key indicating that a command is complete and ready to be executed

Many commands are given to the OS by control statements which deal with:

  • process creation and management
  • i/o handling
  • secondary storage management
  • main memory management
  • file system access
  • protection
  • networking

Tuesday, July 7, 2009

VIRTUAL MACHINE

VIRTUAL MACHINE - A virtual machine is a type of computer application used to create a virtual environment, which is referred to as virtualization. Virtualization allows the user to see the infrastructure of a network through a process of aggregation. Virtualization may also be used to run multiple operating systems at the same time. Through the help of a virtual machine, the user can operate software located on the computer platform.


    IMPLEMENTATION

"The concept of the virtual machine is one of the most important concepts in computer science today. Emulators use virtual machines, operating systems use virtual machines (Microsoft's .NET), and programming languages use virtual machines (Perl, Java)". Read on for his review of Virtual Machine Design and Implementation in C/C++, an attempt to examine and explain virtual machines and the concepts which allow them to exist.




Top Ten Benefits

1.) Designed for virtual machines running on Windows Server 2008 and Microsoft Hyper-V Server. Hyper-V is the next-generation hypervisor-based virtualization platform from Microsoft, which is designed to offer high performance, enhanced security, high availability, scalability, and many other improvements. VMM is designed to take full advantage of these foundational benefits through a powerful yet easy-to-use console that streamlines many of the tasks necessary to manage virtualized infrastructure. Even better, administrators can manage their traditional physical servers right alongside their virtual resources through one unified console.



2.) Support for Microsoft Virtual Server and VMware ESX. With this release, VMM now manages VMware ESX virtualized infrastructure in conjunction with the Virtual Center product. Now administrators running multiple virtualization platforms can rely on one tool to manage virtually everything. With its compatibility with VMware VI3 (through Virtual Center), VMM now supports features such as VMotion and can also provide VMM-specific features like Intelligent Placement to VMware servers.



3.) Performance and Resource Optimization (PRO). PRO enables the dynamic management of virtual resources through Management Packs that are PRO-enabled. Utilizing the deep monitoring capabilities of System Center Operations Manager 2007, PRO enables administrators to establish remedial actions for VMM to execute if poor performance or pending hardware failures are identified in hardware, operating systems, or applications. As an open and extensible platform, PRO encourages partners to design custom management packs that promote compatibility of their products and solutions with PRO’s powerful management capabilities.



4.) Maximize datacenter resources through consolidation. A typical physical server in the datacenter operates at only 5 to 15 percent CPU capacity. VMM can assess and then consolidate suitable server workloads onto virtual machine host infrastructure, thus freeing up physical resources for repurposing or hardware retirement. Through physical server consolidation, continued datacenter growth is less constrained by space, electrical, and cooling requirements.



5.)Machine conversions are a snap! Converting a physical machine to a virtual one can be a daunting undertaking—slow, problematic, and typically requiring you to halt the physical server. But thanks to the enhanced P2V conversion in VMM, P2V conversions will become routine. Similarly, VMM also provides a straightforward wizard that can convert VMware virtual machines to VHDs through an easy and speedy Virtual-to-Virtual (V2V) transfer process.


6.) Quick provisioning of new machines. In response to new server requests, a truly agile IT department delivers new servers to its business clients anywhere in the network infrastructure with a very quick turnaround. VMM enables this agility by providing IT administrators with the ability to deploy virtual machines in a fraction of the time it would take to deploy a physical server. Through one console, VMM allows administrators to manage and monitor virtual machines and hosts to ensure they are meeting the needs of the corresponding business groups.


7.) Intelligent Placement minimizes virtual machine guesswork in deployment. VMM does extensive data analysis on a number of factors before recommending which physical server should host a given virtual workload. This is especially critical when administrators are determining how to place several virtual workloads on the same host machine. With access to historical data—provided by Operations Manager 2007—the Intelligent Placement process is able to factor in past performance characteristics to ensure the best possible match between the virtual machine and its host hardware.


8.) Delegated virtual machine management for Development and Test. Virtual infrastructures are commonly used in Test and Development environments, where there is constant provisioning and tear down of virtual machines for testing purposes. This latest version of VMM features a thoroughly reworked and improved self-service Web portal, through which administrators can delegate this provisioning role to authorized users while maintaining precise control over the management of virtual machines.


9.) The library helps keep virtual machine components organized. To keep a data center’s virtual house in order, VMM provides a centralized library to store various virtual machine “building blocks”—off-line machines and other virtualization components. With the library’s easy-to-use structured format, IT administrators can quickly find and reuse specific components, thus remaining highly productive and responsive to new server requests and modifications.


10.) Windows PowerShell provides a rich management and scripting environment. The entire VMM application is built on the command-line and scripting environment, Windows PowerShell. This version of VMM adds additional PowerShell commandlets and “view script” controls, which allow administrators to exploit customizing or automating operations at an unprecedented level.






SYSTEM GENERATION

  • System definition: The necessary application and z/TPF system knowledge required to select the hardware configuration and related values used by the z/TPF system software.
  • System generation: The process of creating the z/TPF system tables and configuration-dependent system software.

  • System restart and switchover: The procedures used by the z/TPF system software to ready the configuration for online use.

SYSTEM BOOT

The typical computer system boots over and over again with no problems, starting the computer's operating system (OS) and identifying its hardware and software components that all work together to provide the user with the complete computing experience. But what happens between the time that the user powers up the computer and when the GUI icons appear on the desktop?
In order for a computer to successfully boot, its BIOS, operating system, and hardware components must all be working properly; failure of any one of these three elements will likely result in a failed boot sequence.

Thursday, July 2, 2009

CHAPTER 3: OPERATING SYSTEM STRUCTURE

system components

  • operating system process management -

    Process: a program in execution

    Keeps track of each process and its state

    Create, delete, suspend, resume processes; synchronize process communications, handle deadlocks

    Possibly support threads (executable parts of a process)

  • main memory management- Memory is the electronic holding place for instructions and data that the computer's microprocessor can reach quickly. When the computer is in normal operation, its memory usually contains the main parts of the operating system and some or all of the application programs and related data that are being used. Memory is often used as a shorter synonym for random access memory (RAM). This kind of memory is located on one or more microchips that are physically close to the microprocessor in the computer. Most desktop and notebook computers sold today include at least 16 megabytes of RAM, and are upgradeable to include more. The more RAM you have, the less frequently the computer has to access instructions and data from the more slowly accessed hard disk form of storage.

  • file management - The term computer file management refers to the manipulation of documents and data in files on a computer. Specifically, one may create a new file or edit an existing file and save it; open or load a pre-existing file into memory; or close a file without saving it. Additionally, one may group related files in directories. These tasks are accomplished in different ways in different operating systems and depend on the user interface design and, to some extent, the storage medium being used.

  • i/o system management - The I/O subsystem hides the peculiarities of specific hardware devices from the rest of the system. It typically consists of a memory-management component for I/O (buffering, caching, and spooling), a general device-driver interface, and drivers for specific hardware devices.

  • secondary storage management - Secondary storage management is a classical feature of database management systems. It is usually supported through a set of mechanisms. These include index management, data clustering, data buffering, access path selection and query optimization.
    None of these is visible to the user: they are simply performance features. However, they are so critical in terms of performance that their absence will keep the system from performing some tasks (simply because they take too much time). The important point is that they be invisible. The application programmer should not have to write code to maintain indices, to allocate disk storage, or to move data between disk and main memory. Thus, there should be a clear independence between the logical and the physical level of the system.

  • protection management-

    Provide mechanism for controlling access to programs, processes, or users

    Essential in multitasking and multiuser systems

  • command interpreter system - The command interpreter is the interface between the user and the operating system: it reads commands from the user (typed at a keyboard or taken from a script), interprets them, and has the OS carry them out. Some operating systems include the command interpreter in the kernel; others, such as MS-DOS and UNIX, treat it as a special program (a shell) that runs when a user logs on.

operating system services



operating system services - Operating systems are responsible for providing essential services within a computer system:

  • initial loading of programs, and transfer of programs between secondary storage and main memory
  • supervision of the input/output devices
  • file management
  • protection facilities

system calls



file management system

    Also referred to as simply a file system or filesystem: the system that an operating system or program uses to organize and keep track of files. For example, a hierarchical file system is one that uses directories to organize files into a tree structure.
    Although the operating system provides its own file management system, you can buy separate file management systems. These systems interact smoothly with the operating system but provide more features, such as improved backup procedures and stricter file protection.

  • process control

  • file management -

    Keeps tracks of available space on the system

    Maintains directory structure and hierarchy

    Supports file manipulation commands

    Keeps track of file information (inode, name, timestamp)


  • device management - Device Management is a set of technologies, protocols and standards used to allow the remote management of mobile devices, often involving updates of firmware over the air (FOTA). The network operator, handset OEM or in some cases even the end-user (usually via a web portal) can use Device Management, also known as Mobile Device Management, or MDM, to update the handset firmware/OS, install applications and fix bugs, all over the air. Thus, large numbers of devices can be managed with single commands and the end-user is freed from the requirement to take the phone to a shop or service center to refresh or update.
    For companies, a Device Management system means better control and safety as well as increased efficiency, decreasing the possibility for device downtime. As the number of smart devices increases in many companies today, there is a demand for managing, controlling and updating these devices in an effective way. As mobile devices have become true computers over the years, they also force organizations to manage them properly. Without proper management and security policies, mobile devices pose threat to security: they contain lots of information, while they may easily get into wrong hands. Normally an employee would need to visit the IT / Telecom department in order to do an update on the device. With a Device Management system, that is no longer the issue. Updates can easily be done "over the air". The content on a lost or stolen device can also easily be removed by "wipe" operations. In that way sensitive documents on a lost or a stolen device do not arrive in the hands of others.

  • information maintenance

    Product information may include all sorts of different data:

  • Basic descriptions
  • Selling features
  • Technical specifications
  • Pricing
  • Photos
  • Logos
  • Diagrams
  • etc

And each of these may have many variations. Some are used for brochures, some in price lists. Sometimes the same wording is required, sometimes different. One person may be arranging a new brochure, another updating the website. How do you ensure consistency and avoid duplication?

Centralised data maintenance

The answer is to manage all of the data from one place. The data can exist in different forms and even be located in different places but is controlled by one system—infobank.

infobank provides the best of both worlds: distributed data entry with centralised control. Individual users can be dispersed over your network or connected over the Internet.