


OPERATING SYSTEM
An operating system (OS) can be defined as a set of computer programs that manages the hardware and software resources of a computer, maintaining a proper balance between the software and hardware present in the system. Put another way, an OS is a suite (set) of programs, implemented in software, in firmware (hard-wired instructions on chips, usually in ROM), or in both, that makes the hardware usable.
At the foundation of all system software, an operating system performs basic tasks such as controlling and allocating memory, prioritizing system requests, controlling input and output devices, facilitating networking and managing file systems. The operating system forms a platform for other system software and for application software.
Windows, Linux, and Mac OS are some of the most popular operating systems.
Goals and Functions of OS
Operating systems can be defined by what they do, i.e. by their functions, goals and objectives. Some of the goals of an OS are:
 1 Convenience for the User
This is one of the major goals of an OS. Operating systems exist because it is easier to compute with them than without them; in other words, they make the computer easier to use.
2 Efficiency
An OS allows computer system resources to be used in an efficient manner. This is particularly important for large, shared multi-user systems, which are usually expensive. In the past, efficiency (i.e. optimal use of the computer resources) was often considered more important than convenience.
3 Evolutionary Capabilities
Ability to evolve also happens to be one of the goals of the OS. An OS should be constructed in such a way as to permit the effective development, testing and introduction of new system functions without interfering with its service.
Services Provided by the OS
The services provided by the OS can be categorised into two:
1 Convenience for the Programmer/User
The conveniences offered to the user take the following forms:
i. Program Creation: Although editors and debuggers are not part of the OS, they are accessed through the OS to assist programmers in creating programs.
ii. Program Execution: The OS ensures that programs are loaded into main memory, that I/O devices and files are initialised, and that other resources are prepared. The program must be able to end its execution either normally or abnormally; in the case of an abnormal end, it must indicate the error.
iii. Access to I/O Devices: Each I/O device requires its own set of instructions or control signal for operation. The OS takes care of the details so that the programmer can think in terms of reads and writes.
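For example, a programmer can use the same read and write operations regardless of the underlying storage device. The Python sketch below (illustrative only, using a temporary file) relies on the OS to handle all device details behind a handful of calls:

```python
import os
import tempfile

# The OS hides device details: the same read/write interface works
# whether the file lives on a disk, an SSD, or a network share.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"device-independent I/O")  # write: OS picks the device commands
    os.lseek(fd, 0, os.SEEK_SET)             # reposition the file offset
    data = os.read(fd, 100)                  # read: OS fetches the bytes back
finally:
    os.close(fd)
    os.unlink(path)

print(data.decode())
```

The programmer never issues a single device-specific control signal; the OS translates these generic calls into whatever the hardware requires.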
iv. Controlled Access: In the case of files, control includes an understanding of the nature of the I/O device (e.g. diskette drive, CDROM drive, etc.) as well as the file format of the storage medium. The OS deals with these details. In the case of the multi-user system, the OS must provide protection mechanisms to control access to the files.
v. Communications: There are many instances in which a process needs to exchange information with another process. There are two major ways in which communication can occur:
• It can take place between processes executing on the same computer.
• It can take place between processes executing on different computer systems that are linked by a computer network.
• Communications may be implemented via a shared memory or by a technique of message passing in which packets of information are moved between processes by the OS.
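The message-passing style can be sketched in Python. This is an analogy rather than a real kernel facility: a thread-safe queue stands in for the OS mechanism that moves packets of information between processes.

```python
import queue
import threading

# A thread-safe queue standing in for the OS's message-passing facility.
mailbox = queue.Queue()

def producer():
    for i in range(3):
        mailbox.put(f"packet {i}")   # send a message
    mailbox.put(None)                # sentinel: no more messages

received = []

def consumer():
    while True:
        msg = mailbox.get()          # blocks until a message arrives
        if msg is None:
            break
        received.append(msg)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)
```

The two tasks never share data directly; everything travels through the mailbox, which is exactly the property that makes message passing work across machine boundaries as well.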
vi. Error Detection: A variety of errors can occur while a computer system is running. These errors include:
• Hardware errors: CPU or memory errors, power failure, and device failures such as a broken connection on a network or a printer running out of paper.
• Software errors: arithmetic overflow, attempts to access forbidden memory locations, or the inability of the OS to grant the request of an application.
In each case, the OS must make the response that has the least impact on running applications. The response may range from ending the program that caused the error, to retrying the operation, to simply reporting the error to the application.
2 Efficiency of System: Single and Multi-User
In the area of system efficiency, the OS offers the following services:
i. System Access or Protection: In the case of a shared or public system, the OS controls access to the system and to specific system resources by ensuring that each user authenticates him/herself to the system, usually by means of a password, before being allowed access to system resources. This extends to defending external I/O devices, including modems and network adapters, from invalid access attempts, and to recording all such connections for the detection of break-ins.

ii. Resource Allocation: In an environment where multiple users or multiple jobs run at the same time, resources must be allocated to each of them. Many different types of resources are managed by the OS. Some (such as CPU cycles, main memory and file storage) may have general request and release codes. For instance, in determining how best to use the CPU, the OS has CPU-scheduling routines that take into account the speed of the CPU, the jobs that must be executed, the number of registers available, and other factors. These routines may also be used to allocate plotters, modems and other peripheral devices.

iii. Accounting: This helps to keep track of how much of, and what types of, computer resources are used by each user. Today, this record keeping is often not for billing purposes but simply for accumulating usage statistics. These statistics may be a valuable tool for researchers who want to reconfigure the system to improve computing services.

iv. Ease of Evolution of the OS: A major OS will evolve over time for a number of reasons, such as hardware upgrades and new types of hardware. For example, the use of graphics terminals may affect OS design, because such a terminal may allow the user to view several applications at the same time through ‘windows’ on the screen, which requires more sophisticated support in the OS.

v. New Services: In response to user demands or the needs of system managers, the OS may expand to offer new services.

vi. Fixes: Faults may be discovered in the OS over the course of time, and fixes will need to be made.
Other features provided by the OS include:
• Defining the user interface
• Sharing hardware among users
• Allowing users to share data
• Scheduling resources among users
• Facilitating I/O
• Recovering from errors
• Etc.

Kernel: In computer science, the kernel is the central component of most computer operating systems (OS). Its responsibilities include managing the system's resources and the communication between hardware and software components.
These tasks are done differently by different kernels, depending on their design and implementation. While monolithic kernels try to achieve these goals by executing all the code in the same address space to increase the performance of the system, microkernels run most of their services in user space, aiming to improve the maintainability and modularity of the codebase. A range of possibilities exists between these two extremes.
Attempting to communicate with the kernel in its own language would be extremely complicated and frustrating. This is where the shell comes in. The shell is basically an interpreter that understands commands in something resembling common English and translates those commands into a language the kernel understands. The primary role of the shell is to provide an interface through which the user can interact with the kernel. The shell also accepts messages from the kernel and displays them in a language the user can understand.
[Diagram: the kernel connecting applications to the memory, devices, and CPU]


The user types commands to the shell, which communicates with the kernel, and the kernel in turn communicates with the hardware. The kernel is the heart of the operating system. The shell consists of command interpreters or programming languages.
[Diagram: layers of an operating system — user, command, shell, kernel, hardware]
Kernel Basic Responsibilities
The kernel's primary purpose is to manage the computer's resources and allow other programs to run and use these resources. Typically, the resources consist of:
• The CPU (frequently called the processor). This is the most central part of a computer system, responsible for running or executing programs on it. The kernel takes responsibility for deciding at any time which of the many running programs should be allocated to the processor or processors.
• The computer's memory. Memory is used to store both program instructions and data. Typically, both need to be present in memory in order for a program to execute. Often multiple programs will want access to memory, frequently demanding more memory than the computer has available. The kernel is responsible for deciding which memory each process can use, and determining what to do when not enough is available.
• Any Input/Output (I/O) devices present in the computer, such as disk drives, printers, displays, etc. The kernel allocates requests from applications to perform I/O to an appropriate device (or subsection of a device, in the case of files on a disk or windows on a display) and provides convenient methods for using the device (typically abstracted to the point where the application does not need to know implementation details of the device).

Kernels also usually provide methods for synchronization and communication between processes (called inter-process communication or IPC).
Process management
 The main task of a kernel is to allow the execution of applications and support them with features such as hardware abstractions. To run an application, a kernel typically sets up an address space for the application, loads the file containing the application's code into memory (perhaps via demand paging), sets up a stack for the program and branches to a given location inside the program, thus starting its execution.
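From user space we cannot perform these steps ourselves; we can only ask the OS to do them. A minimal Python sketch that requests a new process (behind this one call, the OS builds the address space, loads the code, sets up the stack, and starts execution):

```python
import subprocess
import sys

# Ask the OS to create a new process running a tiny Python program.
# The kernel handles the address space, loading, and stack setup.
result = subprocess.run(
    [sys.executable, "-c", "print('child process running')"],
    capture_output=True, text=True,
)
print(result.stdout.strip())
print(result.returncode)   # 0 indicates a normal (not abnormal) end
```

Note how the parent also learns whether the child ended normally or abnormally, via the return code the OS reports back.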
Memory management
The kernel has full access to the system's memory and must allow processes to access this memory safely as they require it.
Device management
To perform useful functions, processes need access to the peripherals connected to the computer, which are controlled by the kernel through device drivers.
System call
System calls provide an interface between a running program (process) and the operating system. System calls allow user-level processes to request some services from the operating system which the process itself is not allowed to do.
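A few common system calls, as exposed through Python's os module. Each call here is a thin wrapper over a kernel service that the process could not safely perform on its own:

```python
import os

# Each of these is a thin wrapper over a kernel system call:
pid = os.getpid()                    # getpid(): ask the kernel for our process ID
n = os.write(1, b"via write()\n")    # write(): send bytes to stdout (fd 1)
cwd = os.getcwd()                    # getcwd(): ask for the working directory
```

The process never touches the terminal hardware or the file system structures directly; it requests the service and the kernel performs it on the process's behalf.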
Types of OS
OS can be categorised in different ways based on perspectives. Some of the major ways in which the OS can be classified are explored and introduced in this unit.
A. Types of Operating Systems Based on the Types of Computer they Control and the Sort of Applications they Support
Based on the types of computers they control and the sort of applications they support, there are generally four types within the broad family of operating systems. The broad categories are as follows:
 1 Real-Time Operating Systems (RTOS)
They are used to control machinery, scientific instruments and industrial systems. An RTOS typically has very little user-interface capability, and no end-user utilities, since the system will be a sealed box when delivered for use. A very important part of an RTOS is managing the resources of the computer so that a particular operation executes in precisely the same amount of time every time it occurs. In a complex machine, having a part move more quickly just because system resources are available may be just as catastrophic as having it not move at all because the system is busy. An RTOS can be hard or soft. A hard RTOS guarantees that critical tasks are performed on time. A soft RTOS is less restrictive: a critical real-time task gets priority over other tasks and retains that priority until it completes.
2 Single-User, Single-Tasking Operating System
As the name implies, this operating system is designed to manage the computer so that one user can effectively do one thing at a time. The Palm OS for Palm handheld computers is a good example of a modern single-user, single-task operating system.
3 Single-User, Multi-Tasking Operating System
This is the type of operating system most people use on their desktop and laptop computers today. Windows 98 and the Mac O.S. are both examples of an operating system that will let a single user have several programs in operation at the same time.
4 Multi-User Operating Systems
A multi-user operating system allows many different users to take advantage of the computer's resources simultaneously. The operating system must make sure that the requirements of the various users are balanced, and that each of the programs they are using has sufficient and separate resources, so that a problem with one user does not affect the entire community of users. Unix, VMS, and mainframe operating systems such as MVS are examples of multi-user operating systems.
It is important to differentiate here between multi-user operating systems and single-user operating systems that support networking. Windows 2000 and Novell Netware can each support hundreds or thousands of networked users, but the operating systems themselves are not true multi-user operating systems: the system administrator is the only user for Windows 2000 or Netware. The network support, and all the remote logins the network enables, are, in the overall plan of the operating system, a program being run by the administrative user.

B. Types of OS Based on the Nature of Interaction that Takes Place between the Computer User and His/Her Program during its Processing
Modern computer operating systems may be classified into three groups, which are distinguished by the nature of interaction that takes place between the computer user and his or her program during its processing. The three groups are batch, time-shared and real-time operating systems.
1 Batch Processing OS
In a batch processing operating system environment, users submit jobs to a central place where these jobs are collected into a batch and subsequently placed on an input queue at the computer where they will be run. In this case, the user has no interaction with the job during its processing, and the computer’s response time is the turnaround time (i.e. the time until results are ready for return to the person who submitted the job).
2 Time Sharing OS
Another mode for delivering computing services is provided by time-sharing operating systems. In this environment a computer provides computing services to several or many users concurrently on-line. Here, the various users share the central processor, the memory, and other resources of the computer system in a manner facilitated, controlled, and monitored by the operating system. The user, in this environment, has nearly full interaction with the program during its execution, and the computer’s response time may be expected to be no more than a few seconds.
3 Real Time OS
The third class of operating systems, real-time operating systems, is designed to service those applications where response time is of the essence in order to prevent error, misrepresentation or even disaster. Examples of real-time operating systems are those which handle airline reservations, machine tool control, and the monitoring of a nuclear power station. The systems, in this case, are designed to be interrupted by external signals that require the immediate attention of the computer system. In fact, many computer operating systems are hybrids, providing more than one of these types of computing service simultaneously. It is especially common to have a background batch system running in conjunction with one of the other two on the same computer.
C. Other Types of OS Based on the Definition of the System/Environment
 A number of other definitions are important to gaining a better understanding and subsequently classifying operating systems:
1 Multiprogramming Operating System
A multiprogramming operating system is a system that allows more than one active user program (or part of user program) to be stored in main memory simultaneously.
Thus, it is evident that a time-sharing system is a multiprogramming system, but note that a multiprogramming system is not necessarily a time-sharing system. A batch or real time operating system could, and indeed usually does, have more than one active user program simultaneously in main storage. Another important, and all too similar, term is ‘multiprocessing’. A multiprocessing system is a computer hardware configuration that includes more than one independent processing unit. The term multiprocessing is generally used to refer to large computer hardware complexes found in major scientific or commercial applications.
2 Network Operating Systems
A networked computing system is a collection of physically interconnected computers. The operating system of each of the interconnected computers must contain, in addition to its own stand-alone functionality, provisions for handling communication and the transfer of programs and data among the other computers with which it is connected. In a network operating system, the users are aware of the existence of multiple computers and can log in to remote machines and copy files from one machine to another. Each machine runs its own local operating system and has its own user (or users). Although network operating systems are designed with more complex functional capabilities, they are not fundamentally different from single-processor operating systems. They obviously need a network interface controller and some low-level software to drive it, as well as programs for remote login and remote file access, but these additions do not change the essential structure of the operating system.
3 Distributed Operating Systems
A distributed computing system consists of a number of computers that are connected and managed so that they automatically share the job-processing load among the constituent computers, or direct the job load, as appropriate, to particularly configured processors. Such a system requires an operating system which, in addition to the typical stand-alone functionality, provides coordination of the operations and information flow among the component computers. The distributed computing environment and its operating systems, like the networking environment, are designed with more complex functional capabilities. However, a distributed operating system, in contrast to a network operating system, is one that appears to its users as a traditional uniprocessor system, even though it is actually composed of multiple processors. In a true distributed system, users should not be aware of where their programs are being run or where their files are located; that should all be handled automatically and efficiently by the operating system.
Design Philosophies
Two basic designs exist:
• Event-driven (priority scheduling) designs switch tasks only when an event of higher priority needs service, called preemptive priority.
• Time-sharing designs switch tasks on a clock interrupt, and on events, called round-robin.

Time-sharing designs switch tasks more often than is strictly needed, but give smoother, more deterministic multitasking, the illusion that a process or user has sole use of a machine.
Scheduling
In typical designs, a task has three states:
1) Running 2) Ready 3) Blocked.
Most tasks are blocked most of the time. Only one task per CPU is running. In simpler systems, the ready list is usually short, two or three tasks at most. The real key is designing the scheduler. Usually, the data structure of the ready list in the scheduler is designed to minimise the worst-case length of time spent in the scheduler's critical section, during which preemption is inhibited and, in some cases, all interrupts are disabled. But the choice of data structure also depends on the maximum number of tasks that can be on the ready list (or ready queue).
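A minimal round-robin sketch of such a ready list in Python. Tasks are modelled as generators that yield at the end of each time slice; this illustrates the Running/Ready bookkeeping, not a real dispatcher:

```python
from collections import deque

# Each task is a generator: one next() call models one time slice.
def task(name, slices):
    for i in range(slices):
        yield f"{name} ran slice {i}"

ready = deque([task("A", 2), task("B", 3)])  # the ready list (FIFO)
trace = []
while ready:
    current = ready.popleft()        # dispatch: Ready -> Running
    try:
        trace.append(next(current))  # run one time slice
        ready.append(current)        # preempt: Running -> back to Ready
    except StopIteration:
        pass                         # task finished; drop it from the list
print(trace)
```

With only a handful of tasks, the FIFO deque keeps the scheduler's critical section short: both the dispatch and the re-queue are O(1) operations.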
In more advanced real-time systems, real-time tasks share computing resources with many non-real-time tasks, and the ready list can be arbitrarily long. In such systems, a scheduler ready list implemented as a linked list would be inadequate.
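One common alternative (an assumption here, since the text does not name a specific structure) is a priority queue such as a binary heap, which keeps insertion and removal at O(log n) even when the ready list is long:

```python
import heapq

# Priority-ordered ready list: lower number = higher priority.
# heappush/heappop are O(log n), so a long ready list stays cheap
# compared with scanning a linked list end to end.
ready = []
heapq.heappush(ready, (10, "logger"))        # low-priority background task
heapq.heappush(ready, (1, "motor-control"))  # high-priority real-time task
heapq.heappush(ready, (5, "ui-refresh"))

order = [heapq.heappop(ready)[1] for _ in range(len(ready))]
print(order)
```

The scheduler always dispatches the highest-priority ready task first, without ever walking the whole list.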
Intertask Communication and Resource Sharing
A significant problem that multitasking systems must address is sharing data and hardware resources among multiple tasks. It is usually "unsafe" for two tasks to access the same specific data or hardware resource simultaneously. ("Unsafe" means the results are inconsistent or unpredictable, particularly when one task is in the midst of changing a data collection. The view by another task is best done either before any change begins, or after changes are completely finished.) There are three common approaches to resolve this problem:
• Temporarily masking/disabling interrupts
• Binary semaphores
• Message passing
General-purpose operating systems usually do not allow user programs to mask (disable) interrupts, because the user program could control the CPU for as long as it wished. Modern CPUs make the interrupt disable control bit (or instruction) inaccessible in user mode to allow operating systems to prevent user tasks from doing this. Many embedded systems and RTOSs, however, allow the application itself to run in kernel mode for greater system call efficiency and also to permit the application to have greater control of the operating environment without requiring OS intervention.
On single-processor systems, if the application runs in kernel mode and can mask interrupts, often that is the best (lowest overhead) solution to preventing simultaneous access to a shared resource. While interrupts are masked, the current task has exclusive use of the CPU; no other task or interrupt can take control, so the critical section is effectively protected. When the task exits its critical section, it must unmask interrupts; pending interrupts, if any, will then execute.
A binary semaphore is either locked or unlocked. When it is locked, a queue of tasks can wait for the semaphore. Typically a task can set a timeout on its wait for a semaphore. Problems with semaphore based designs are well known: priority inversion and deadlocks. In priority inversion, a high priority task waits because a low priority task has a semaphore. A typical solution is to have the task that has a semaphore run at (inherit) the priority of the highest waiting task. But this simplistic approach fails when there are multiple levels of waiting (A waits for a binary semaphore locked by B, which waits for a binary semaphore locked by C). Handling multiple levels of inheritance without introducing instability in cycles is not straightforward. In a deadlock, two or more tasks lock a number of binary semaphores and then wait forever (no timeout) for other binary semaphores, creating a cyclic dependency graph. The simplest deadlock scenario occurs when two tasks lock two semaphores in lockstep, but in the opposite order. Deadlock is usually prevented by careful design, or by having floored semaphores (which pass control of a semaphore to the higher priority task on defined conditions).
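A Python sketch of two tasks sharing two binary semaphores. Because both tasks acquire the semaphores in the same fixed order, the two-task deadlock described above (each holding one semaphore and waiting for the other) cannot occur. This is illustrative only; a real RTOS semaphore also carries priority information.

```python
import threading

# Two binary semaphores guarding two shared resources.
s1 = threading.Semaphore(1)
s2 = threading.Semaphore(1)
log = []

def worker(name):
    for _ in range(5):
        s1.acquire()          # always lock resource 1 first...
        s2.acquire()          # ...then resource 2, never the reverse order
        log.append(name)      # critical section: both resources held
        s2.release()          # release in the opposite order
        s1.release()

threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(log))
```

Imposing a global lock order is the "careful design" the text refers to: it breaks the cyclic dependency graph before it can form.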
The other approach to resource sharing is for tasks to send messages. In this paradigm, the resource is managed directly by only one task; when another task wants to interrogate or manipulate the resource, it sends a message to the managing task. This paradigm suffers from similar problems as binary semaphores: Priority inversion occurs when a task is working on a low-priority message, and ignores a higher-priority message (or a message originating indirectly from a high priority task) in its in-box. Protocol deadlocks occur when two or more tasks wait for each other to send response messages. Although their real-time behaviour is less crisp than semaphore systems, simple message-based systems usually do not have protocol deadlock hazards, and are generally better-behaved than semaphore systems.
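The manager-task pattern can be sketched in Python with threads and a queue (the thread and queue here stand in for OS tasks and kernel message passing):

```python
import queue
import threading

# One manager task owns the resource (a counter); other tasks never
# touch it directly, they only send messages asking for an update.
requests = queue.Queue()
counter = {"value": 0}

def manager():
    while True:
        msg = requests.get()
        if msg == "stop":
            break
        counter["value"] += msg   # only the manager mutates the resource

def client(amount, times):
    for _ in range(times):
        requests.put(amount)      # ask the manager to do the work

m = threading.Thread(target=manager)
m.start()
clients = [threading.Thread(target=client, args=(1, 100)) for _ in range(4)]
for c in clients:
    c.start()
for c in clients:
    c.join()
requests.put("stop")              # all client messages are already queued
m.join()
print(counter["value"])
```

No locks appear in the client code at all: serialising every update through the manager's in-box is what prevents simultaneous access.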
Interrupt Handlers and the Scheduler
Since an interrupt handler blocks the highest-priority task from running, and since real-time operating systems are designed to keep thread latency to a minimum, interrupt handlers are typically kept as short as possible. The interrupt handler defers all interaction with the hardware for as long as possible; typically all that is necessary is to acknowledge or disable the interrupt (so that it won't occur again when the interrupt handler returns). The interrupt handler then queues work to be done at a lower priority level, often by unblocking a driver task (through releasing a semaphore or sending a message). The scheduler often provides the ability to unblock a task from an interrupt handler.
Memory Allocation
Memory allocation is even more critical in an RTOS than in other operating systems. Firstly, speed of allocation is important. A standard memory allocation scheme scans a linked list of indeterminate length to find a suitable free memory block; this is unacceptable in an RTOS, where memory allocation has to occur within a fixed time.
Secondly, memory can become fragmented as free regions become separated by regions that are in use. This can cause a program to stall, unable to get memory, even though there is theoretically enough available. Memory allocation algorithms that slowly accumulate fragmentation may work fine for desktop machines—when rebooted every month or so—but are unacceptable for embedded systems that often run for years without rebooting.
The simple fixed-size-blocks algorithm works astonishingly well for simple embedded systems.
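A sketch of such a fixed-size-blocks allocator in Python (illustrative: a real RTOS would hand out raw memory addresses, but the free-list logic is the same). Every block has the same size and comes off a free list, so allocation and release are O(1) and fragmentation cannot occur:

```python
class FixedBlockPool:
    """Fixed-size-block allocator: O(1) alloc/free and no fragmentation,
    because every block is the same size and comes from a free list."""

    def __init__(self, block_size, num_blocks):
        self.block_size = block_size
        self.pool = bytearray(block_size * num_blocks)  # the backing memory
        self.free_list = list(range(num_blocks))        # indices of free blocks

    def alloc(self):
        if not self.free_list:
            return None               # pool exhausted; caller must handle it
        return self.free_list.pop()   # O(1): take any free block index

    def free(self, block_index):
        self.free_list.append(block_index)  # O(1): return block to free list

pool = FixedBlockPool(block_size=64, num_blocks=4)
blocks = [pool.alloc() for _ in range(4)]   # four distinct block indices
print(pool.alloc())                         # pool empty: alloc() returns None
pool.free(blocks[0])
print(pool.alloc())                         # the freed block is reused
```

Both alloc and free touch only the end of a Python list, so the worst-case time is fixed regardless of how long the system has been running, which is exactly the property an RTOS needs.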
 Object-Oriented Operating System
An object-oriented operating system is an operating system which internally uses object-oriented methodologies.
An object-oriented operating system is in contrast to an object-oriented user interface or programming framework, which can be placed above a non-object-oriented operating system like DOS, Microsoft Windows or Unix.
It can be argued, however, that there are already object-oriented concepts involved in the design of a more typical operating system such as Unix. While a more traditional language like C does not support object orientation as fluidly as more recent languages, the notion, for example, of a file, stream, or device driver (in Unix, each represented as a file descriptor) can be considered a good example of object orientation: they are, after all, abstract data types, with various methods in the form of system calls, whose behavior varies based on the type of object, whose implementation details are hidden from the caller, and might even use inheritance in their underlying code.
Time-Sharing
Time-sharing refers to sharing a computing resource among many users by multitasking.
