CSC 539: Operating Systems Structure and Design
Spring 2005
HW2: CPU Simulation with I/O Interrupts
Exercises from the book: 5.3, 5.4, and 5.5
In HW1, you compiled and modified a simulation of a simple CPU. User jobs were
assumed to be loaded all at once and executed in order. By setting constants, you
could simulate either batch processing (large load delay between jobs, large time
slices) or timesharing (small load delay between jobs, small time slices). For the
second part of this assignment, you will work with a more robust extension of that
program, one that takes I/O interrupts into account.
User jobs will be entered in a file, as before, but the format of the file will be
slightly more complex. Each line in the file defines a single job, giving the
origination time for the job, the job ID number, and then the lengths of the CPU
bursts for that job. You may assume that the jobs are ordered by start time, with
earlier arrivals coming first in the file. For example:
3 1 6 3
5 2 4 40 4 20
15 3 10
Here, Job #1 originates at time 3. The job requires 6 units of computation,
followed by an I/O operation, followed by another 3 units of computation. Job #2
originates at time 5 and requires 4 CPU bursts of 4, 40, 4, and 20 units, separated
by I/O operations. Job #3 originates at time 15 and requires 10 units of computation
(with no I/O operations). To model the fact that different I/O operations may take
different amounts of time, the duration of each I/O operation is determined as a
random number from a specified range (whose limits are input by the user).
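
To make the format concrete, here is a minimal sketch of how one line of the file
might be parsed. The JobRecord structure and parseJobLine function are hypothetical
illustrations only; the provided Job and JobStream classes handle this for you and
may differ:

    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>
    using namespace std;

    // Hypothetical record for one line of the job file; the actual Job
    // class provided in Job.h may differ.
    struct JobRecord {
        int arrivalTime;       // origination time of the job
        int jobID;             // job ID number
        vector<int> bursts;    // CPU burst lengths, separated by I/O operations
    };

    // Parse one line of the job file: origination time, job ID, then any
    // number of CPU burst lengths.
    JobRecord parseJobLine(const string& line) {
        istringstream in(line);
        JobRecord job;
        in >> job.arrivalTime >> job.jobID;
        int burst;
        while (in >> burst) {
            job.bursts.push_back(burst);
        }
        return job;
    }

    int main() {
        JobRecord job = parseJobLine("5 2 4 40 4 20");
        cout << "Job #" << job.jobID << " arrives at time " << job.arrivalTime
             << " with " << job.bursts.size() << " CPU bursts." << endl;
        return 0;
    }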
The extended CPU simulator that handles this new type of data file is provided for
you in the following files:
CPU.cpp,
Job.h,
Job.cpp,
JobStream.h,
JobStream.cpp,
CPUScheduler.h,
CPUScheduler.cpp,
Die.h,
Die.cpp.
The details of this program will be discussed in class.
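
For reference, the random I/O durations behave like rolls of a die over the
user-specified range. The sketch below illustrates the idea; the class name and
interface here are assumptions, not the actual contents of Die.h:

    #include <cstdlib>
    #include <ctime>

    // Illustrative sketch only; the actual Die class in Die.h/Die.cpp
    // may differ in name and interface.
    class DelayDie {
      public:
        // minDelay and maxDelay are the user-specified limits on I/O duration.
        DelayDie(int minDelay, int maxDelay) : minD(minDelay), maxD(maxDelay) { }

        // Returns a pseudo-random delay in the range [minD, maxD].
        int roll() const { return minD + std::rand() % (maxD - minD + 1); }

      private:
        int minD, maxD;
    };

    // Usage: seed the generator once, then roll once per I/O operation.
    //     std::srand((unsigned) std::time(0));
    //     DelayDie ioDelay(5, 15);     // IO_MIN_DELAY=5, IO_MAX_DELAY=15
    //     int duration = ioDelay.roll();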
- Using the default settings (TIME_SLICE=10, LOAD_DELAY=1, IO_MIN_DELAY=5,
IO_MAX_DELAY=15), run the simulator on the sample data from above and print a log
of the execution. Recall: you can copy the contents of the output window by
right-clicking within the window, selecting Select All from the menu, and then
pasting that text into whatever text editor or word processor you choose.
- Next, modify the CPUScheduler class so that it maintains the following
statistics:
  - CPU utilization, i.e., 100.0*(time CPU is doing useful work)/(total elapsed time)
  - average turnaround time for completed jobs, i.e., the average amount of time
    from arrival to completion
  - average wait time for completed jobs, i.e., the average amount of time each job
    sat waiting in the ready queue

  Similar to HW1, you should add a new member function named displayStats that
  displays these statistics. Note that when reporting average turnaround time and
  average wait time, the function should only consider completed jobs. Also note
  that time spent in the I/O queue should not count toward the wait time for a job,
  only time spent waiting in the ready queue. Once you have displayStats working,
  you should add a call in CPU.cpp to display the final statistics for the
  simulation; a sketch of one possible approach is given below.
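
  As a starting point, here is a minimal sketch of one way these statistics might
  be accumulated and displayed. All of the counter names below are assumptions
  about how you might instrument the class; they are not part of the provided
  CPUScheduler:

    #include <iostream>

    // Sketch only: a pared-down stand-in showing counters you might add
    // to the real CPUScheduler; the provided class differs.
    class CPUScheduler {
      public:
        void displayStats();
      private:
        long busyTime;         // time units the CPU spent doing useful work
        long elapsedTime;      // total elapsed simulation time
        long totalTurnaround;  // sum of (completion - arrival), completed jobs
        long totalWait;        // sum of ready-queue waits, completed jobs
        int  numCompleted;     // number of jobs that have completed
    };

    void CPUScheduler::displayStats() {
        if (elapsedTime > 0) {
            std::cout << "CPU utilization: "
                      << 100.0 * busyTime / elapsedTime << "%" << std::endl;
        }
        if (numCompleted > 0) {   // averages are over completed jobs only
            std::cout << "Average turnaround: "
                      << (double) totalTurnaround / numCompleted << std::endl;
            std::cout << "Average wait: "
                      << (double) totalWait / numCompleted << std::endl;
        }
    }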
- Using the default settings (TIME_SLICE=10, LOAD_DELAY=1, IO_MIN_DELAY=5,
IO_MAX_DELAY=15), run the modified simulator on the sample data from above and
print a log of the execution, this time including the final statistics.
- Suppose you had a sequence of large, CPU-bound jobs. That is, all of the jobs
were of roughly the same size and consisted of large blocks of computation with few
I/O operations. If you increased the time slice significantly, would you expect CPU
utilization to get better or worse? How would average turnaround and wait times be
affected? Justify your answers with written explanations and with references to
specific simulation data.
- Suppose you had a mixed sequence of I/O-bound jobs. That is, job lengths varied
and each job contained many I/O operations. If you increased the time slice
significantly, would you expect CPU utilization to get better or worse? How would
average turnaround and wait times be affected? Justify your answers with written
explanations and with references to specific simulation data.
- All of the simulations so far have assumed that the time required for I/O
operations is roughly comparable to the time slice. Suppose, instead, that the
delay for I/O operations were large compared to the time slice. How would this
affect system performance? Would this type of system favor CPU-bound jobs,
I/O-bound jobs, or neither? Justify your answers with written explanations and with
references to specific simulation data.