The Function of Operating Systems
4.1.1 The Main Features of Operating Systems
The operating system (OS) must provide and manage hardware resources as well as
provide an interface between the user and the machine and between applications software
and the machine. The OS must also provide other services such as data security.
Originally, if a program needed input, the program would have to contain the code to do
this, similarly if output were required. This led to duplication of code so the idea of an OS
was born. The OS contained the necessary input and output functions that could be called
by an application. Similarly, disk input and output routines were incorporated into the OS.
This led to the creation of subroutines to do these simple tasks such as read a character
from a keyboard or send a character to a printer. The joining together of all these basic
input and output routines led to the input-output control system (IOCS). Originally, the
IOCS could only read a punched card or send data to a card punch. However, as new input
and output media, such as magnetic tape and disk, were developed, the IOCS became more
sophisticated.
Another complication was added when assembly and high-level languages were developed,
as the machine did not use these languages: machines use binary codes for very simple
instructions. With the development of these new types of programming language a
computer would have to
load an assembler, compiler or interpreter,
load the assembly or high-level program,
do the translation,
store the results somewhere in memory,
execute the program,
read input data,
output the results.
Organising all this had become too complex for the user. Also, as the processor could
work much faster than the manual operator and the input and output devices, much time
was wasted.
Further, to make full use of the processor, more than one program should be stored in
memory and the processor should give time to each of the programs. Suppose two
programs are stored in memory; if one is using an input or output device (both very
slow compared to the processor), it makes sense for the other program to use the
processor. In fact this can be extended to more than two programs, as shown in the figure.
The OS must now manage the memory so that all three programs shown in the figure are
kept separate, as well as any data that they use. It must also schedule the jobs in the
sequence that makes best use of the processor.
The I/O phase should not hold up the processor too much, which can easily happen if the
I/O devices are very slow, like a keyboard or printer. This can be overcome by using
Simultaneous Peripheral Operations On-Line (spooling). The idea is to store all input and
output on a high-speed device such as a disk. The figure shows how this may be achieved.
Another problem is that programs may not be loaded into the same memory locations each
time they are loaded. For example, suppose that three programs are loaded in the order A,
B, C on one occasion and in the order C, A, B on another occasion. The results are shown
in the figure.
A further problem occurs if two or more users wish to use the same program at the same
time. For example, suppose user X and user Y both wish to use a compiler for C++ at the
same time. Clearly it is a waste of memory if two copies of the compiler have to be loaded
into main memory at the same time. It would make much more sense if user X's program
and user Y's program are stored in main memory together with a single copy of the
compiler, as shown in the figure.
Now the two users can use the compiler in turn and will want to use different parts of the
compiler. Also note that there are two different sets of data for the compiler, user X's
program and user Y's program. These two sets of data and the outputs from the compiler
for the two programs must be kept separate. Programs such as this compiler, working in
the way described, are called re-entrant.
Memory management, scheduling and spooling are described in more detail in the following
sections.
4.1.2 Interrupts
The simplest way of obeying instructions is the simple loop shown in the figure: fetch the
next instruction, execute it, then ask 'Any more instructions?' and, if so, repeat.
This is satisfactory so long as nothing goes wrong. Unfortunately things do go wrong and
sometimes the normal order of operation needs to be changed. For example, a user has
used up all the time allocated to his use of the processor. This change in order is instigated
by interrupts. There are several sources of interrupt:
o Input/output interrupts. Generated by an I/O device to signal that a job is complete or
that an error has occurred, e.g. the printer is out of paper or is not connected.
o Timer interrupts. Generated by an internal clock indicating that the processor must
attend to time-critical activities (see scheduling later).
o Hardware failure interrupts. For example, a power failure, which indicates that the OS
must close down as safely as possible.
o Program interrupts. Generated by an error in a program, such as violation of memory use
(trying to use part of the memory reserved by the OS for other use) or an attempt to
perform an invalid operation (such as division by zero).
If the OS is to manage interrupts, the simple sequence above needs to be modified as
shown in the figure.
(Figure: after executing each instruction the processor asks 'Is there an interrupt?'; if
there is, it services the interrupt; it then asks 'Any more instructions?' and repeats.)
This diagram shows that, after the execution of an instruction, the OS must see if an
interrupt has occurred. If one has occurred, the OS must service the interrupt if it is more
important than the task already being carried out (see priorities later). This involves
obeying a new set of instructions. The real problem is 'how can the OS arrange for the
interrupted program to resume from exactly where it left off?'. In order to do this the
contents of all the registers in the processor must be saved so that the OS can use them to
service the interrupt. Chapter 4.3 explains the registers whose contents have to be
stored, as well as explaining the fetch/execute cycle in more detail.
Another problem the OS has to deal with happens if an interrupt occurs while another
interrupt is being serviced. There are several ways of dealing with this but the simplest is to
place the interrupts in a queue and only allow return to the originally interrupted program
when the queue is empty. Alternative systems are explained in Section 4.1.3. Taking the
simplest case, the order of processing is shown in the figure.
(Figure: 'Is there an interrupt in the interrupt queue?'; if so, service the next interrupt
in the interrupt queue; repeat until the queue is empty, then resume the interrupted
program.)
The queue of interrupts is the normal first in first out (FIFO) queue and holds indicators to
the next interrupt that needs to be serviced.
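The FIFO interrupt queue just described can be sketched in a few lines of Python. This is a hypothetical illustration, not real OS code; the function name and the idea of "servicing" by recording each interrupt are invented for the example.

```python
from collections import deque

# Sketch of the simple interrupt-handling policy described above:
# interrupts are queued first in first out, and control only returns
# to the interrupted program when the queue is empty.

def service_all(interrupt_queue):
    """Service interrupts in FIFO order; return the order serviced."""
    serviced = []
    while interrupt_queue:                  # keep going until queue empty
        interrupt = interrupt_queue.popleft()
        serviced.append(interrupt)          # "service" = record it here
    return serviced
```

Because the queue is FIFO, interrupts are serviced strictly in arrival order, exactly as the text describes.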
4.1.3 Scheduling
One of the tasks of the OS is to arrange the jobs that need to be done into an appropriate
order. The order may be chosen to ensure that maximum use is made of the processor;
another order may make one job more important than another. In the latter case the OS
makes use of priorities.
Suppose the processor is required by program A, which is printing wage slips for the
employees of a large company, and by program B, which is analysing the annual
world-wide sales of the company, which has a turnover of many millions of pounds.
Program A makes little use of the processor and is said to be I/O bound. Program B makes
a great deal of use of the processor and is said to be processor bound.
If program B has priority over program A for use of the processor, it could be a long time
before program A can print any wage slips. This is shown in the figure.
The next figure shows what happens if A is given priority over B for use of the processor. This
shows that the I/O bound program can still run in a reasonable time and much better
throughput is achieved.
The objectives of scheduling are to
maximise the use of the whole of the computer system;
be fair to all users;
provide a reasonable response time to all users, whether they are on-line users or batch
users;
prevent the system failing if it is becoming overloaded;
make sure that the system is consistent by always giving similar response times to similar
activities from day to day.
To achieve these objectives some criteria are needed in order to determine the order in
which jobs are executed. The following is a list of criteria which may be used to determine a
schedule which will achieve the above objectives.
Priority. Give some jobs a greater priority than others when deciding which job should be
given access to the processor.
I/O or processor bound. If a processor bound job is given the main access to the processor
it could prevent the I/O devices being serviced efficiently.
Type of job. Batch processing, on-line and real-time jobs all require different response
times.
Resource requirements. The amount of time needed to complete the job, the memory
required, and the I/O and processor time needed.
Resources used so far. The amount of processor time used so far, and how much I/O has
been used so far.
Waiting time. The time the job has been waiting to use the system.
In order to understand how scheduling is accomplished it is important to realise that any
job may be in one, and only one, of three states. A job may be ready to start, running on
the system or blocked because it is waiting for a peripheral, for example. The figure shows
how jobs may be moved from one state to another. Note that a job can only enter the
running state from the ready state. The ready and blocked states are queues that may hold
several jobs. On a standard single processor computer only one job can be in the running
state. Also, all jobs entering the system normally enter via the ready state and (normally)
only leave the system from the running state.
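As a rough sketch, the three-state model and its legal transitions might be expressed like this. The state names and the allowed moves are taken from the description above; the code itself is illustrative.

```python
# Hedged sketch of the three-state job model described above.
# Jobs enter via "ready" and (normally) leave from "running";
# a job can only enter "running" from "ready".

ALLOWED = {
    ("ready", "running"),    # the scheduler dispatches the job
    ("running", "ready"),    # job pre-empted, back to the ready queue
    ("running", "blocked"),  # job must wait for a peripheral
    ("blocked", "ready"),    # the awaited peripheral has responded
}

def move(state, new_state):
    """Return the new state if the transition is legal, else raise."""
    if (state, new_state) not in ALLOWED:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```

Note that ("ready", "blocked") is deliberately absent: a waiting job cannot block, because only a running job can request a peripheral.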
When entering the system a job is placed in the ready queue by the High Level Scheduler
(HLS). The HLS makes sure that the system is not overloaded.
Sometimes it is necessary to swap jobs between the main memory and backing store (see
Memory Management in Section 4.1.4). This is done by the Medium Level Scheduler (MLS).
Moving jobs in and out of the ready state is done by the Low Level Scheduler (LLS). The
LLS decides the order in which jobs are to be placed in the running state. There are many
policies that may be used to do scheduling, but they can all be placed in one of two classes.
These are pre-emptive and non-pre-emptive policies.
A pre-emptive scheme allows the LLS to remove a job from the running state so that
another job can be placed in the running state. In a non-pre-emptive scheme each job runs
until it no longer requires the processor. This may be because it has finished or because it
needs an I/O device.
Some common scheduling policies are
First Come First Served (FCFS)
Shortest Job First (SJF)
Round Robin (RR)
Shortest Remaining Time (SRT)
Multi-level Feedback Queues (MFQ)
and there are many more.
o FCFS simply means that the first job to enter the ready queue is the first to enter the
running state. This favours long jobs.
o SJF means that the jobs in the ready queue are sorted in ascending order of the times
expected to be needed by each job. New jobs are added to the queue in such a way as to
preserve this order.
o RR gives each job a maximum length of processor time (called a time slice) after which
the job is put at the back of the ready queue and the job at the front of the queue is given
use of the processor. If a job is completed before its time slice is up, it leaves the system.
o SRT keeps the ready queue sorted on the amount of expected time still required by a
job. This scheme favours short jobs even more than SJF. Also there is a danger of long
jobs being prevented from running.
o MFQ involves several queues of different priorities, with jobs migrating downwards.
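As an illustration, the round robin policy above can be simulated in a few lines. The job names and time units are invented for the example, and one "slice" of work is done per turn.

```python
from collections import deque

# Illustrative round robin (RR) sketch: each job gets at most one
# time slice, then goes to the back of the ready queue. A job that
# finishes within its slice leaves the system immediately.

def round_robin(jobs, slice_len=1):
    """jobs: dict of name -> remaining time. Returns completion order."""
    queue = deque(jobs.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= slice_len               # job uses the processor
        if remaining <= 0:
            finished.append(name)            # completed: leaves the system
        else:
            queue.append((name, remaining))  # back of the ready queue
    return finished
```

With jobs A (3 units), B (1 unit) and C (2 units), the short jobs finish first even though A arrived first, which is exactly the fairness RR aims for.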
There are other ways of allocating priorities. Safety-critical jobs will be given very high
priority; on-line and real-time applications will also have to have high priorities. For
example, a computer monitoring the temperature and pressure in a chemical process whilst
analysing results of readings taken over a period of time must give high priority to the
control program. If the temperature or pressure goes out of a pre-defined range, the
control program must take over immediately. Similarly, if a bank's computer is printing
bank statements overnight and someone wishes to use a cash point, the cash point job
must take priority. This scheme is shown in the figure, which shows that queues are needed
for jobs with the same priority.
In this scheme, any job can only be given use of the processor if all the jobs at higher
levels have been completed. Also, if a job enters a queue that has a higher priority than the
queue from which the running program has come, the running program is placed back in
the queue from which it came and the job that has entered the higher priority queue is
placed in the running state.
Multi-level feedback queues work in a similar way except that each job is given a maximum
length of processor time. When this time is up, and the job is not completely finished, the
job is placed in the queue which has the next lower priority level. At the lowest level,
instead of a first in first out queue, a round robin system is used, as shown in the figure.
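The priority-queue scheme above, where a job may only run when every higher-priority queue is empty, might be sketched as follows. Assuming, purely for illustration, that index 0 is the highest priority level and that each level is a FIFO queue:

```python
from collections import deque

# Sketch of scheduling with one FIFO queue per priority level, as
# described above: a job can only be given the processor if all the
# queues at higher levels are empty.

def next_job(queues):
    """queues: list of deques, index 0 = highest priority.
    Return the next job to run, or None if nothing is ready."""
    for level in queues:
        if level:
            return level.popleft()   # first job at the highest non-empty level
    return None
```

Here a cash point request placed in a higher-priority queue is always dispatched before overnight statement printing waiting at a lower level.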
4.1.4 Memory Management
In order for a job to be able to use the processor the job must be stored in the computer's
main memory. If there are several jobs to be stored, they, and their data, must be
protected from the actions of other jobs.
Suppose jobs A, B, C and D require 50k, 20k, 10k and 30k of memory respectively and the
computer has a total of 130k available for jobs. (Remember the OS will require some
memory.) The figure shows one possible arrangement of the jobs.
Free 20k
Job D 30k
Job C 10k
Job B 20k
Job A 50k
Now suppose job C terminates and job E, requiring 25k of memory, is next in the ready
queue. Clearly job E cannot be loaded into the space that job C has relinquished. However,
there is 20k + 10k = 30k of memory free in total. So the OS must find some way of using
it. One solution to the problem would be to move job D up to job B. This would make heavy
use of the processor as not only must all the instructions be moved but all addresses used
in the instructions would have to be recalculated.
When jobs are loaded into memory, they may not always occupy the same locations.
Supposing, instead of jobs A, B, C and D being needed and loaded in that order, it is
required to load jobs A, B, D and E in that order. Now job D occupies different locations in
memory to those shown above. So again there is a problem of using different addresses.
The OS has the task of both loading the jobs and adjusting the addresses. The loader does
both these tasks. The calculation of addresses can be done by recalculating each address
used in the instructions once the address of the first instruction is known. Alternatively,
relative addressing can be used; that is, addresses are specified relative to the address of
the first instruction.
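A loader using relative addressing can be sketched very simply: each address in the program is stored relative to the program's start, and the loader adds the load address to each one. This is an illustrative model, not a real loader.

```python
# Sketch of load-time relocation with relative addressing: the
# instruction format and address list here are invented purely
# for illustration.

def relocate(relative_addresses, base_address):
    """Turn relative addresses into absolute ones at load time."""
    return [base_address + addr for addr in relative_addresses]
```

The same program can then be loaded at 5000 on one occasion and at 8000 on another without any of its addresses being recalculated by hand.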
Another problem to be solved is when to move jobs. Possible solutions are
whenever a job terminates;
when a new job is too large for any existing space;
at regular intervals;
when the user decides.
This system is known as variable partitioning with compaction. This is because the size of
the segments varies according to the sizes of the jobs and the 'holes' are removed by
compacting the jobs.
An alternative method is to divide both the memory and the jobs into fixed size units called
pages. As an example, suppose jobs A, B, C, D and E consist of 6, 4, 1, 3 and 2 pages
respectively. Also suppose that the available memory for jobs consists of 12 pages and
jobs A, B and C have been loaded into memory as shown in the figure.
(Figure: job A has pages A1 to A6, job B pages B1 to B4, job C page C1. Memory, from the
top down: one free page, C1, B4, B3, B2, B1, A6, A5, A4, A3, A2, A1.)
Now suppose job B terminates, releasing four pages, and jobs D and E are ready to be
loaded. Clearly we have a similar problem to that caused by variable partitioning. The 'hole'
consists of four pages into which job D (three pages) will fit, leaving one page plus the
original one page of free memory. E consists of two pages, so there is enough memory for
E but the pages are not contiguous and we have the situation shown in the figure.
(Figure: job E, with pages E1 and E2, is waiting to be loaded. Memory holds one free page
at the top, then C1, with the second free page lying in the hole left by job B, so the two
free pages are not contiguous.)
The big difference between partitioning and paging is that jobs do not have to occupy
contiguous pages. Thus the solution is shown in the figure.
The problem with paging is again address allocation. This can be overcome by keeping a
table that shows which memory pages are used for the job pages. Then, if each address
used in a job consists of a page number and the distance the required location is from the
start of the page, a suitable conversion is possible.
Suppose, in job A, an instruction refers to a location that is on page 5 and is 46 locations
from the start of page 5. This may be represented by the pair
5 | 46
Now suppose we have the following table mapping job pages to memory pages:
Job Page | Memory Page
A5 | 8
We see that page A5 is stored in page 8 of memory, thus
5 | 46 becomes 8 | 46
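The table lookup above can be sketched in code. The single page-table entry mirrors the worked example (job page 5 held in memory page 8); a real table would have one entry per page.

```python
# Sketch of paging address translation as described above: a job
# address is (page, offset); the page table maps job pages to memory
# pages, and the offset within the page is unchanged.

PAGE_TABLE = {5: 8}   # job page -> memory page (entry from the example)

def translate(page, offset, page_table):
    """Convert a (job page, offset) address to (memory page, offset)."""
    return (page_table[page], offset)
```

So the job address (5, 46) becomes the memory address (8, 46), as in the worked example.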
Paging uses fixed length blocks of memory. An alternative is to use variable length blocks.
This method is called segmentation. In segmentation, programmers divide jobs into
segments, possibly of different sizes. Usually, the segments would consist of data, or
sub-routines or groups of related sub-routines.
Since segments may be of different lengths, address calculation has to be carefully
checked. The segment table must not only contain the start position of each segment but
also the size of each segment. This is needed to ensure that an address does not go out of
range. The figure shows how two jobs may be stored in memory. In this case the
programmer split job A into four segments and job B into three segments. These two jobs,
when loaded into memory, took up the positions shown in the figure.
(Figure: job A's segments A1 to A4 and job B's segments B1 to B3 loaded into memory,
with a free block remaining; the segments of a job need not be adjacent in memory.)
Now suppose that an instruction specifies an address as segment 3, displacement (from the
start of the segment) 132. The OS will look up, in the process segment table, the base
address (in memory) of segment 3. The OS checks that the displacement is not greater
than the segment size. If it is, an error is reported. Otherwise the displacement is added to
the base
address to produce the actual address in memory to be used. The algorithm for this is
1. Get the segment number.
2. Use the segment number to find the length of the segment from the segment table.
3. If the displacement is greater than the segment size, produce an error message and
stop.
4. Otherwise, use the segment number to find the base address of the segment from the
segment table.
5. Add the displacement to the base address to produce the physical address.
This is also shown in the figure.
Segment Number | Displacement | Physical Address
3 | 132 | 3632
Seg. No. | Seg. Size | Base Address
3 | 1500 | 3500
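The worked example can be checked with a short sketch of the algorithm. The single table row (size 1500, base address 3500 for segment 3) is taken from the example above; everything else is illustrative.

```python
# Sketch of segmented address translation as described above: look up
# the segment's size and base address, reject displacements that go
# out of range, otherwise add the displacement to the base address.

SEGMENT_TABLE = {3: (1500, 3500)}   # segment -> (size, base address)

def physical_address(segment, displacement, table):
    """Return the physical address, or raise if out of range."""
    size, base = table[segment]
    if displacement > size:          # address would go out of range
        raise ValueError("displacement exceeds segment size")
    return base + displacement
```

For segment 3, displacement 132, this gives 3500 + 132 = 3632, matching the figure.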
Paging and segmentation lead to another important technique called virtual memory. We
have seen that jobs can be loaded into memory when they are needed using a paging
technique. When a program is running, only those pages that contain code that is needed
need be loaded. For example, suppose a word processor has been written with page 1
containing the instructions to allow users to enter text and to alter the text. Also suppose
that page 2 contains the code for formatting characters, page 3 the code for formatting
paragraphs and page 4 contains the code for cutting, copying and pasting. To run this word
processor only page 1 needs to be loaded initially. If the user then wants to format some
characters so that they are in bold, then page 2 will have to be loaded. Similarly, if the user
wishes to copy and paste a piece of text, page 4 will have to be loaded. When other
facilities are needed, the appropriate page can be loaded. If the user now wishes to format
some more characters, the OS does not need to load page 2 again as it is already loaded.
Now, what happens if there is insufficient space for the new page to be loaded? As only the
pages containing active instructions need to be loaded, the new page can overwrite a page
that is not currently being used. For example, suppose the user wishes to use paragraph
formatting; then the OS can load page 3 into the memory currently occupied by page 2.
Clearly, this means that programs can be written and used that are larger than the
available main memory.
There must be some system that decides which pages to overwrite. There are many such
policies: overwrite the page that has not been used for the longest period of time, overwrite
a page that has not been used recently, or overwrite pages in first in first out order. All of
these create overheads for the OS.
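One of the policies just mentioned, overwriting the page unused for the longest time (least recently used, LRU), can be sketched as a simple simulation. The page numbers and frame count below are invented for the example.

```python
# Illustrative least-recently-used (LRU) page replacement: when a
# needed page is not in memory and all frames are full, evict the
# page that has gone longest without being used.

def access_pages(references, frames):
    """Simulate a sequence of page references; return the number of
    page loads (page faults) that occur."""
    loaded = []          # pages in memory, least recently used first
    loads = 0
    for page in references:
        if page in loaded:
            loaded.remove(page)      # will re-append as most recent
        else:
            loads += 1               # page fault: must load the page
            if len(loaded) == frames:
                loaded.pop(0)        # evict least recently used page
        loaded.append(page)          # mark page as most recently used
    return loads
```

With references 1, 2, 1, 3, 2 and only two frames, four loads are needed; the second access to page 1 is a hit, but pages 3 and 2 each force an eviction.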
To further complicate matters not every page can be overwritten. Some pages contain a
job's data that will change during the running of a program. To keep track of this the OS
keeps a flag for each page that can be initially set to zero. If the content of the page
changes, the flag can be set to 1. Now, before overwriting a page, the OS can see if that
page has been changed. If it has, then the OS will save the page before loading a new page
in its place. The OS now has to both load and save pages. If the memory is very full, this
loading and saving can use up a great deal of time and can mean that most of the
processor's time is spent swapping pages. This situation is called disk thrashing.
Systems can use both multi-programming and virtual memory. Also, virtual memory can
use segmentation as well as paging although this can become very complex.
4.1.5 Spooling
Spooling was mentioned in Section 4.1.1 and is used to place input and output on a fast
access device, such as disk, so that slow peripheral devices do not hold up the processor.
In a multi-programming, multi-access or network system, several jobs may wish to use the
peripheral devices at the same time. It is essential that the input and output for different
jobs do not become mixed up. This can be achieved by using Simultaneous Peripheral
Operations On-Line (spooling).
Suppose two jobs, in a network system, are producing output that is to go to a single
printer. The output is being produced in sections and must be kept separate for each job.
This can be achieved by opening two files on a disk, one for each job. Suppose we call
these files File1
and File2. As the files are on disk, job 1 can write to File1 whenever it wishes and job 2 can
write to File2. When the output from a job is finished, the name (and other details) of the
file can be placed in a queue. This means that the OS now can send the output to the
printer in the order in which the file details enter the queue. As the name of a file does not
enter the queue until all output from the job to the corresponding file is complete, the
output from different jobs is kept separate.
Spooling can be used for any number of jobs. It is important to realise that the output itself
is not placed in the queue. The queue simply contains the details of the files that need to be
printed so that the OS sends the contents of the files to the printer only when the file is
complete. The part of the OS that handles this task is called the spooler or print spooler.
It should be noted that spooling not only keeps output from different jobs separate, it also
saves the user having to wait for the processor until the output is actually printed by a
printer (a relatively slow device). Spooling is used on personal computers as well as large
computer systems capable of multi-programming.
4.1.6 Desktop PC Operating Systems
There are basically two types of OS used on PCs: those that are command driven and those
that use a graphical user interface (GUI). Probably the best known of these are MS-DOS
(command driven) and Windows (GUI). These differ in the way the user uses them and in
the tasks that can be carried out.
All OSs for PCs allow the user to copy, delete and move files as well as letting the user
create a hierarchical structure for storing files. They also allow the user to check the disk
and tidy up the files on the disk.
However, Windows allows the user to use much more memory than MS-DOS and it allows
multi-tasking. This is when the user opens more than one program at a time and can move
from one to another. Try opening a word processor and the clipboard in Windows at the
same time. Adjust the sizes of the windows so that you can see both at the same time. Now
mark a piece of text and copy it to the clipboard. You will see the text appear in the
clipboard window although it is not the active window. This is because the OS can handle
both tasks apparently at the same time. In fact the OS is swapping between the tasks so
fast that the user is not aware of the swapping.
Another good example of multi-tasking is to run the clock program while using another
program. You will see that the clock is keeping time although you are using another piece
of software. Try playing a CD while writing a report!
The OS not only offers the user certain facilities, it also provides application software with
I/O facilities. In this Section you will see how an OS is loaded and how it controls the PC.
This section, printed with a shaded background, is not required by the OCR Computing
Specification, but may be interesting and useful for understanding how the system works.
When a PC is switched on, it contains only a very few instructions. The first thing the
computer does is run the power-on self-test (POST) routine that resides in permanent
memory. The POST routine clears the registers in the CPU and loads the address of the first
instruction in the boot program into the program counter. This boot program is stored in
read-only memory (ROM) and contains the basic input/output system (BIOS).
Control is now passed to the boot program which first checks itself and the POST program.
The CPU then sends signals to check that all the hardware is working properly. This
includes checking the buses, systems clock, RAM, disk drives and keyboard. If any of these
devices, such as the hard disk, contain their own BIOS, this is incorporated with the
system's BIOS. Often the BIOS is copied from a slow CMOS BIOS chip to the faster RAM.
The PC is now ready to load the OS. The boot program first checks drive A to see if a disk
is present. If one is present, it looks for an OS on the disk. If no OS is found, an error
message is produced. If there is no disk in drive A, the boot program looks for an OS on
disk C. Once found, the boot program looks, in the case of Windows systems, for the files
IO.SYS and MSDOS.SYS. Once the files are found, the boot program loads the boot record,
about 512 bytes, which then loads IO.SYS. IO.SYS holds extensions to the ROM BIOS and
contains a routine called SYSINIT. SYSINIT controls the rest of the boot procedure.
SYSINIT now takes control and loads MSDOS.SYS which works with the BIOS to manage
files and execute programs.
The OS searches the root directory for a boot file such as CONFIG.SYS which tells the OS
how many files may be opened at the same time. It may also contain instructions to load
various device drivers. The OS tells MSDOS.SYS to load a file called COMMAND.COM. This
OS file is in three parts. The first part is a further extension to the I/O functions and it joins
the BIOS to become part of the OS. The second part contains resident OS commands, such
as DIR and COPY.
The files CONFIG.SYS and AUTOEXEC.BAT are created by the user so that the PC starts up
in the same configuration each time it is switched on.
The OS supplies the user, and applications programs, with facilities to handle input and
output, copy and move files, handle memory allocation and any other basic tasks.
In the case of Windows, the operating system loads into different parts of memory. The OS
then guarantees the use of a block of memory to an application program and protects this
memory from being accessed by another application program. If an application program
needs to use a particular piece of hardware, Windows will load the appropriate device
driver. Windows also uses virtual memory if an application has not been allocated sufficient
memory.
As mentioned above, Windows allows multi-tasking; that is, the running of several
applications at the same time. To do this, Windows uses the memory management
techniques described in Section 4.1.4. In order to multi-task, Windows gives each
application a very short period of time, called a time-slice. When a time-slice is up, an
interrupt occurs and Windows passes control to the next application. In order to do this, the
OS has to save the contents of the CPU registers at the end of a time-slice and load the
registers with the values needed by the next application. Control is then passed to the next
application. This is continued so that all the applications have use of the processor in turn.
If an application needs to use a hardware device, Windows checks to see if that device is
available. If it is, the application is given the use of that device. If not, the request is placed
in a queue. In the case of a slow peripheral such as a printer, Windows saves the output to
the hard disk first and then does the printing in the background so that the user can
continue to use the application. If further printing is needed before other printing is
completed, then spooling is used as described in Section 4.1.5.
Any OS has to be able to find files on a disk and to be able to store users' files. To do this,
the OS uses the File Allocation Table (FAT). This table uses a linked list to point to the
blocks on the disk that contain files. In order to do this the OS has a routine that will format
a disk. This simply means dividing the disk radially into sectors and into concentric circles
called tracks. Two or more sectors on a single track make up a cluster, as shown in the
figure.
A typical FAT table is shown in the next figure. The first column gives the cluster number and
the second column is a pointer to the next cluster used to store a file. The last cluster used
has a null pointer (usually FFFFH) to indicate the end of the linking. The directory entry for a
file has a pointer to the first cluster in the FAT table. The diagram shows details of two files
stored on a disk.
In order to find a file, the OS looks in the directory for the filename and, if it finds it, the OS
gets the cluster number for the start of the file. The OS can then follow the pointers in the
FAT to find the rest of the file.
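Following the FAT pointers can be sketched as a short loop. The cluster numbers in the example data are invented; FFFFH marks the end of a chain, as described above.

```python
# Sketch of following a FAT cluster chain: the directory entry gives
# the first cluster, each FAT entry points to the next cluster, and
# the marker FFFFH (0xFFFF) terminates the chain.

END = 0xFFFF

def file_clusters(fat, first_cluster):
    """Return the list of clusters a file occupies, in order."""
    clusters = []
    cluster = first_cluster
    while cluster != END:
        clusters.append(cluster)
        cluster = fat[cluster]       # pointer to the next cluster
    return clusters
```

If the directory says a file starts at cluster 2, and the FAT maps 2 to 5, 5 to 7, and 7 to the end marker, the file occupies clusters 2, 5 and 7.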
In this table any unused clusters have a zero entry. Thus, when a file is deleted, the
clusters that were used to save the file can be set to zero. In order to store a new file, all
the OS has to do is find the first cluster with a zero entry and enter that cluster number
in the directory. The OS then searches linearly for further clusters with zero entries to set
up the linked list.
It may appear that using linear searches will take a long time. However, the FAT table is
normally loaded into RAM so that continual disk accesses can be avoided. This will speed up
the search of the FAT.
Note that Windows 95/98 uses virtual FAT (VFAT) which allows files to be saved 32 bits at a
time (FAT uses 16 bits). It also allows file names of up to 255 characters. Windows 98 uses
FAT 32 which allows hard drives greater than 2 Gbytes to be formatted.
4.1.7 Network Operating Systems
This Section should be read in conjunction with Chapters 1.6 and 6.4.
The facilities provided by a NOS depend on the size and type of network. For example, in a
peer-to-peer network all the stations on the network have equal status. In this system one
station may act as a file server and another as a print server. At the same time, all the
stations are clients. A client is a computer that can be used by users of the network. A
peer-to-peer network has little security so the NOS only has to handle the sharing of files
and hardware.
If a network contains one or more servers, the NOS has to manage
file sharing,
hardware sharing (including print spooling),
security,
the user interface.
File sharing allows many users to use the same file at the same time. In order to avoid
corruption and inconsistency, the NOS must only allow one user write access to the file,
other users must only be allowed read access. Also, the NOS must only allow users with
access rights permission to use files; that is, it must prevent unauthorised access to data. It
is important that users do not change system files (files that are needed by the NOS). It is
common practice for the NOS to not only make these files read only, but to hide them from
the users. If a user looks at the disk to see what files are present, these hidden files will not
appear. To prevent users changing read only files to read write files, and to prevent users
showing hidden files, the NOS does not allow ordinary users to change these attributes.
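The one-writer rule described above can be sketched with a simple in-memory table. This is a hypothetical illustration, not a real NOS API; the table layout and function name are invented.

```python
open_files = {}    # file name -> {"writer": user or None, "readers": set()}

def request_access(name, user, mode):
    """Grant write access to at most one user; readers are unrestricted."""
    state = open_files.setdefault(name, {"writer": None, "readers": set()})
    if mode == "write":
        if state["writer"] is not None:
            raise PermissionError("write access already granted to another user")
        state["writer"] = user       # only one user may hold write access
    else:
        state["readers"].add(user)   # any number of users may read
    return mode
```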
To ensure the security of data, the network manager gives users access rights. When users
log onto a network they must enter their user identity and password. The NOS then looks
up, in a table, the users' access rights and only allows them access to those files for which
access is permitted. The NOS also keeps a note of how users want their desktops to
look. This means that when users log on they are always presented with the same screen.
Users are allowed to change how their desktops look, and these settings are stored by the
NOS for use in future sessions.
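The log-on look-up described above can be sketched as follows. The table, user name, password and file names here are all invented for illustration; a real NOS would not store passwords in plain text.

```python
users = {
    "alice": {"password": "s3cret",
              "access": {"report.doc", "data.csv"},   # files this user may open
              "desktop": {"wallpaper": "blue"}},      # saved desktop settings
}

def log_on(identity, password):
    """Check the identity and password, then return the saved desktop."""
    record = users.get(identity)
    if record is None or record["password"] != password:
        raise PermissionError("log-on refused")
    return record["desktop"]          # same screen every time the user logs on

def may_access(identity, file_name):
    """Consult the access-rights table for this user."""
    return file_name in users[identity]["access"]
```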
As many users may use the network and its resources, it may be necessary for the NOS to
keep details of who has used the network, when, for how long and for what purpose. It
may also record which files the user has accessed. This is so that the user can be charged
for such things as printing, the time for which the network has been used and the storage
of files. This part of the NOS may also restrict each user's available storage, the
amount of paper used for printing and so on. From time to time the network manager can
print out details of usage so that charges may be made.
The NOS must share the use of applications such as word processors, spreadsheets and so
on. Thus when a user requests an application, the NOS must send a copy of that application
to the user's station.
Several users may well wish to use the same hardware at the same time. This is
particularly true of printers. When a user sends a file for printing, the file is split into
packets. As many users may wish to use a printer, the packets from different users will
arrive at the print server and will have to be sorted so that the data from different users
are kept separate. The NOS receives these packets and stores the data in different files for
different users. When a particular file is complete, it can be added to the print queue as
described in Section 4.1.5.
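The sorting of interleaved packets into per-user spool files can be sketched as follows; the function name and packet format are assumptions of this sketch.

```python
from collections import deque

spool = {}               # user -> packets received so far
print_queue = deque()    # completed files awaiting printing

def receive_packet(user, data, last=False):
    """Keep packets from different users in separate spool files;
    when a file is complete, add it to the print queue."""
    spool.setdefault(user, []).append(data)
    if last:
        print_queue.append((user, "".join(spool.pop(user))))
```

Even though packets from different users arrive interleaved, each user's data is reassembled in order before it joins the queue.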
The NOS must also ensure that users' files are saved on the server and that they cannot be
accessed by other users. To do this the network manager will allocate each user a fixed
amount of disk space and the NOS will prevent a user from exceeding the amount of
storage allocated. If a user tries to save work when there is insufficient space left, the NOS
will ask the user to delete some files before any more can be saved. In order to do this, the
server's hard drive may be partitioned into many logical drives. This means that, although
there may be only one hard drive, different parts of it can be treated as though they are
different drives. For example, one part may be called the H drive which is where users are
allowed to save their work. This drive will be divided up into folders, each of which is
allocated to a different user. Users only have access to their own folders but can create
sub-folders in their own folders. The NOS must provide this service as well as preventing
users accessing other users' folders. Another part of the drive may be called (say) the U
drive where some users can store files for other users who will be allowed to retrieve, but
not alter, them unless they are saved in the user's own area. The NOS will also restrict
access to certain logical drives to a limited set of users.
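The quota check made before a save can be sketched in a few lines; the user name and the sizes (in, say, Kbytes) are invented for illustration.

```python
quota = {"alice": 100}    # space allocated by the network manager
used = {"alice": 90}      # space already occupied by the user's files

def save_file(user, size):
    """Refuse the save if it would exceed the user's allocation."""
    if used[user] + size > quota[user]:
        raise OSError("insufficient space: please delete some files first")
    used[user] += size
```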
For all the above to work, the NOS will have to handle communications between stations
and servers. Thus, the NOS is in two parts. One part is in each station and the other is in
the server(s). These two parts must be able to communicate with one another so that
messages and data can be sent around the network. The need for rules to ensure that
communication is successful was explained in Chapter 1.6.
Finally, the NOS must provide a user interface between the hardware, software and user.
This has been discussed in Section 4.1.6, but a NOS has to offer different users different
interfaces. When a user logs onto a network, the NOS looks up the needs of the user and
displays the appropriate icons and menus for that user, no matter which station the user
uses. The NOS must also allow users to define their own interfaces within the restrictions
laid down by the network manager.
It must be remembered that users need not understand all the tasks
undertaken by the NOS. As far as users are concerned, they are using a PC as if it were
solely for their own use. The whole system is said to be transparent to the user. This simply
means that users are unaware of the hardware and software actions. A good user interface
has a high level of transparency and this should be true of all operating systems.
4.1.8 Example Questions
1. Explain how an interrupt is handled by a processor which is working on another task. (4)
A. -At some point in the cycle/at the end of an instruction
-the processor will check to see if there are any outstanding interrupts.
-If there are, the current job is suspended and…
-the contents of the special registers are stored so that the job can be restarted later
-interrupts are then serviced until all have been dealt with…
-Control is returned to the original job. (4)
Notes: The handling of interrupts is rather more complex than described here and the
manipulation of the special registers has to be explained in more detail, but the above
answer covers all the points that arise in this section.
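The answer above can be sketched as the check made at the end of each instruction; this is a simplified model (the register set, queue and function names are invented), not how real hardware dispatches handlers.

```python
interrupt_queue = []     # outstanding interrupts awaiting service
handled = []             # record of serviced interrupts, for illustration

def end_of_instruction(registers):
    """Check for interrupts; if any, save state, service all, then restore."""
    if not interrupt_queue:
        return registers              # no interrupts: carry on with the job
    saved = dict(registers)           # store the special registers
    while interrupt_queue:            # service until all have been dealt with
        handled.append(interrupt_queue.pop(0))
    return saved                      # control returns to the original job
```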
2. State three different types of interrupt that may occur and say in what order the three
interrupts would be handled if they all arrived at the processor together, giving a reason for
your answer. (5)
A. -I/O interrupt, e.g. the printer running out of data to print and wanting the buffer
refilled.
-Timer interrupt, where the processor is being forced to move on to some other task.
-Program interrupt, where the processor is being stopped from carrying out an illegal
operation specified in the program code.
-Hardware interrupt, the most serious of which is power failure.
- The order is from the bottom up. The most serious is power failure because no other
tasks can be carried out if there is no power, so the safe powering down of the system
must be paramount.
-Contrast that with the printer wanting more data before it can print any more out. Does it
really matter if the printer has to wait a few more seconds?
Notes: There are far more points to take into account about interrupts. How does the
processor decide which of a number of interrupts is the most important? Or is the interrupt
more important than the work it was doing anyway? An interrupt is simply a signal; it is
not a piece of program code, so how does the processor know what to do?
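One common answer to the "which interrupt first?" question is to give each type a numeric priority, reflecting the order argued above. The values below are invented for illustration (higher number = more urgent).

```python
PRIORITY = {"hardware": 3, "program": 2, "timer": 1, "io": 0}

def service_order(pending):
    """Return the pending interrupts, most urgent first."""
    return sorted(pending, key=lambda kind: PRIORITY[kind], reverse=True)
```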
3. Explain the difference between
(i) Input/output bound jobs and
(ii) Processor bound jobs.
Explain how the computer should prioritise the two types of job if it is going to make
maximum use of the system. (4)
A. -I/O jobs are those that require relatively little processing but do need to use the
peripheral devices substantially
-Processor bound jobs require a large amount of processor time and very little use of the
various peripheral devices.
-I/O jobs are given the highest priority…
-in order to keep the peripherals working as much as possible…
-and because they would be blocked and ‘never see’ the processor if the others had
priority.
4. Describe two types of scheduling that are used by computer processing systems. (4)
A. The answers are to be found straight out of the notes on this chapter, page 10.
5. Explain how interrupts are used in a round robin scheduling operating system. (3)
A. -Each job is given a set amount of processor time.
-At the end of the time available for a job, it is interrupted.
-The operating system inspects the queue of jobs still to be processed and
-if it is not empty allocates the next amount of processor time to the job first in the queue.
-The previous job goes to the end of the queue.
-Use of priorities to determine place in the queue.
-Need for priorities to change according to the amount of recent processing time they have
received.
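The answer above (ignoring the priority refinements) can be sketched as a simple simulation; the time units and job names are invented.

```python
from collections import deque

def round_robin(jobs, time_slice):
    """jobs: list of (name, time_needed). Returns names in completion order.
    Each job runs for at most one time slice, then is interrupted and
    re-queued at the back if it still needs more processor time."""
    queue = deque(jobs)
    finished = []
    while queue:
        name, remaining = queue.popleft()         # job at the front runs
        if remaining <= time_slice:
            finished.append(name)                 # job completes in this slice
        else:                                     # timer interrupt fires:
            queue.append((name, remaining - time_slice))
    return finished
```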
6. a) Explain the difference between paging and segmenting when referring to memory
management techniques. (2)
b) Describe how virtual memory can allow a word processor and a spreadsheet to run
simultaneously in the memory of a computer even though both pieces of software are too
large to fit into the computer’s memory. (3)
A. a) -They are both methods of dividing up the available memory space into more
manageable units.
-Paging involves memory being divided into equal-size areas.
-Segmenting involves areas of different sizes, dependent upon the contents needing to be
stored.
Notes: The first mark point does not really answer the question because it does not
provide a difference; however, it does give a context to the rest of the answer, so is worth
including.
b) -The software code is divided up into parts that have some level of equivalence.
-Those most commonly used routines are kept together
-These are loaded into memory and other routines are
-only loaded when the user calls upon them.
-Thus they give the impression of being permanently available. (3)
Notes: A nice question. It is not only a standard bookwork question but it also relates
specifically to an occurrence seen, if not understood, by computer users every day. A
question with a little more detail about the software being used might expect a more
detailed answer about the sort of things that would be in each of the pages, but it would
also have more marks available.
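The demand loading described in answer (b) can be sketched as a simple address translation. This is a minimal model assuming a 1 Kbyte page size and ignoring page replacement (a real OS would evict pages when memory fills).

```python
PAGE_SIZE = 1024          # assumed page size
page_table = {}           # virtual page number -> memory frame number
next_free_frame = 0

def translate(virtual_address):
    """Map a virtual address to a physical one, loading pages on demand."""
    global next_free_frame
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:            # page fault: load the page now
        page_table[page] = next_free_frame
        next_free_frame += 1
    return page_table[page] * PAGE_SIZE + offset
```

Because a page is only given a frame when it is first referenced, routines that are never called never occupy memory, which is how two large programs appear to fit at once.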
7. Describe how a PC operating system uses a file allocation table to find files when they
are requested.
A. -The disk surface must be divided up into small areas
-Files are stored in these small areas
-Each file will normally use more than one area
-The table of files has an entry pointing to the first area on the disk surface used by that
file and…
-a pointer to the next area.
-Each subsequent area has a pointer to the next area, as in a linked list with…
-a null value to signify the end of the file.