CSC 539: Operating Systems Structure and Design
Fall 2003

HW2: System Simulation with I/O Interrupts


In HW1, you compiled and modified a simulation of a simple system. User jobs were assumed to be loaded all at once, and executed in order. By setting constants, either batch processing (large load delay between jobs, large time slices) or multiprogramming (small load delay between jobs, small time slices) could be simulated. For this assignment, you will work with a more robust extension to that program, which takes I/O interrupts into account.

User jobs will be entered in a file, as before, but the format of the file will be slightly more complex. Each line in the file defines a single job, starting with the origination time for the job, the job ID number, and the lengths of CPU bursts for that job. For example:

3 1 6 3
5 2 4 40 4 20
12 3 10

Here, Job #1 originates at time 3. The job requires 6 units of computation, followed by an I/O operation, followed by another 3 units of computation. Job #2 originates at time 5, and requires 4 CPU bursts of 4, 40, 4 and 20 units, separated by I/O operations. Job #3 originates at time 12 and requires 10 units of computation (with no I/O operations). To model the fact that different I/O operations may take different amounts of time, the duration of each I/O operation will be determined as a random number drawn from a specified range (constants IO_MIN_DELAY and IO_MAX_DELAY provide the lower and upper bounds).
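As a rough sketch of this format (the provided JobStream and RandGen classes presumably handle this differently; the struct and function names below are illustrative, not taken from the provided files), one job line can be parsed and a random I/O delay drawn using only standard C++:

```cpp
#include <cassert>
#include <random>
#include <sstream>
#include <string>
#include <vector>

// Bounds on the random I/O delay (the assignment's default settings).
const int IO_MIN_DELAY = 5;
const int IO_MAX_DELAY = 15;

// One job as read from the data file (names here are illustrative).
struct JobSpec {
    int origination;          // time the job enters the system
    int id;                   // job ID number
    std::vector<int> bursts;  // CPU burst lengths, separated by I/O operations
};

// Parse a single line: origination time, job ID, then the CPU bursts.
JobSpec ParseJobLine(const std::string& text) {
    std::istringstream in(text);
    JobSpec job;
    in >> job.origination >> job.id;
    for (int b; in >> b; ) job.bursts.push_back(b);
    return job;
}

// Draw a uniformly distributed I/O delay in [IO_MIN_DELAY, IO_MAX_DELAY].
int RandomIODelay(std::mt19937& gen) {
    std::uniform_int_distribution<int> dist(IO_MIN_DELAY, IO_MAX_DELAY);
    return dist(gen);
}
```

For example, parsing the second sample line yields origination time 5, job ID 2, and four CPU bursts (4, 40, 4, 20), each pair of bursts separated by one randomly timed I/O operation.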

  1. The extended multiprogramming simulator that handles this new type of data file is provided for you in the following files: multi.cpp, Job.h, Job.cpp, JobStream.h, JobStream.cpp, CPUScheduler.h, CPUScheduler.cpp, RandGen.h. The details of this program will be discussed in class. You may note, however, that the main program (multi.cpp) is much simpler than in HW1. To better manage the complexity of the extended simulator, and also to more easily enable future changes, most of the scheduling details have been encapsulated in a class named CPUScheduler. In addition, the code uses the standard C++ queue and priority_queue container classes.
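To illustrate how those two containers fit a simulator like this one (this is a hypothetical sketch, not the actual CPUScheduler class; all names below are made up for the example), a FIFO queue can hold ready jobs awaiting a time slice, while a min-heap priority_queue can hold jobs blocked on I/O, ordered by completion time:

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Pairs are (ioCompletionTime, jobID); std::greater makes the
// priority_queue a min-heap, so the earliest completion is on top.
using IOEvent = std::pair<int, int>;

class MiniScheduler {
public:
    void MakeReady(int jobID) { ready.push(jobID); }

    // Block a job on I/O; its completion time is now + delay.
    void StartIO(int jobID, int now, int delay) {
        ioWait.push({now + delay, jobID});
    }

    // Move every job whose I/O has finished by `now` back to the ready queue.
    void CompleteIO(int now) {
        while (!ioWait.empty() && ioWait.top().first <= now) {
            ready.push(ioWait.top().second);
            ioWait.pop();
        }
    }

    bool HasReady() const { return !ready.empty(); }
    int NextJob() { int id = ready.front(); ready.pop(); return id; }

private:
    std::queue<int> ready;   // FIFO order for round-robin time slicing
    std::priority_queue<IOEvent, std::vector<IOEvent>,
                        std::greater<IOEvent>> ioWait;
};
```

The FIFO queue gives round-robin behavior for CPU bursts, while the min-heap lets the simulator find the next I/O completion without scanning all blocked jobs.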

  2. Using the default settings (TIME_SLICE=10, LOAD_DELAY=2, IO_MIN_DELAY=5, IO_MAX_DELAY=15), run the simulator on the sample data from above and print a log of the execution. Recall: you can copy the contents of the output window by right-clicking within the window, selecting Select All from the menu, and then pasting that text into whatever text editor or word processor you choose.

  3. Next, modify the CPUScheduler class so that it maintains statistics on the simulation: CPU utilization, average turnaround time, and average wait time. You should add a new member function named DisplayStats that displays these statistics. Note that when reporting average turnaround time and average wait time, the function should consider only completed jobs. Once you have DisplayStats working, you should add a call in multi.cpp to display the final statistics for the simulation.
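One way such statistics might be computed (a sketch only; the struct and field names are assumptions, not the provided class's actual members) uses the usual definitions: turnaround time is completion time minus origination time, and wait time is turnaround time minus the job's total CPU time:

```cpp
#include <cassert>
#include <iostream>
#include <vector>

// Illustrative record for a finished job (field names are assumptions).
struct CompletedJob {
    int origination;   // time the job entered the system
    int completion;    // time the job finished
    int cpuTime;       // total CPU units the job consumed
};

struct Stats {
    int completed;
    double avgTurnaround;
    double avgWait;
};

// Averages are taken over completed jobs only, as the assignment requires.
Stats ComputeStats(const std::vector<CompletedJob>& done) {
    Stats s{static_cast<int>(done.size()), 0.0, 0.0};
    for (const CompletedJob& j : done) {
        int turnaround = j.completion - j.origination;
        s.avgTurnaround += turnaround;
        s.avgWait += turnaround - j.cpuTime;   // time spent not computing
    }
    if (!done.empty()) {
        s.avgTurnaround /= done.size();
        s.avgWait /= done.size();
    }
    return s;
}

// busyTime = total time the CPU spent executing jobs; now = current clock.
void DisplayStats(const std::vector<CompletedJob>& done, int busyTime, int now) {
    Stats s = ComputeStats(done);
    std::cout << "completed jobs:       " << s.completed << "\n"
              << "CPU utilization:      " << 100.0 * busyTime / now << "%\n"
              << "avg. turnaround time: " << s.avgTurnaround << "\n"
              << "avg. wait time:       " << s.avgWait << "\n";
}
```

Separating the computation (ComputeStats) from the printing (DisplayStats) also makes the statistics easy to check independently of the log output.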

  4. Using the default settings (TIME_SLICE=10, LOAD_DELAY=2, IO_MIN_DELAY=5, IO_MAX_DELAY=15), run the modified simulator on the sample data from above and print a log of the execution, including the final statistics reported by DisplayStats.

  5. Suppose you had a sequence of large, CPU-bound jobs. That is, all of the jobs were of roughly the same size and consisted of large blocks of computation with few I/O operations. If you increased the time slice significantly, would you expect CPU utilization to get better or worse? How would average turnaround and wait times be affected? Justify your answers with written explanations and with references to specific simulation data.

  6. Suppose you had a mixed sequence of I/O-bound jobs. That is, job lengths varied and each job contained many I/O operations. If you increased the time slice significantly, would you expect CPU utilization to get better or worse? How would average turnaround and wait times be affected? Justify your answers with written explanations and with references to specific simulation data.

  7. All of the simulations so far have assumed that the time required for I/O operations is roughly comparable to the time slice. Suppose, instead, that the delay for I/O operations was large compared to the time slice. How would this affect system performance? Would this type of system favor CPU-bound jobs, I/O jobs, or neither? Justify your answers with written explanations and with references to specific simulation data.