Trace-Driven Modeling and Analysis of CPU Scheduling in a Multiprogramming System
--- Sherman et al., 1972

- the primary purpose of a multiprogrammed OS is to bring into balance the
  demand for processing and I/O facilities presented by the user job load;
- "external" job scheduling: efforts are made to load into working memory a set
  of jobs that includes both compute-bound and I/O-bound jobs;
- the resource demands of a given job may fluctuate over time (duhh!);
- 1972's view of scheduling: give higher priority to I/O-bound jobs and use a
  job's past performance as an estimate of its future behavior;
- there should be a preemptive scheduling mechanism, and the CPU burst time
  should be bounded (either with a fixed limit or via round-robin);
- preemptive scheduling: decisions are made whenever a job joins the CPU queue;
- the machine: 128k of memory and approx. 2M of disk!
- the 128k of memory is allocated (by software) to seven or fewer "control
  points", which are virtual CPUs. Only one program at a time may be assigned
  to a control point. In the paper, 5 control points are used for user jobs;
- their theoretical "best" scheduling algorithm: give the CPU to the job that
  will compute for the shortest period of time before issuing an i/o request;
- for comparing algorithms, they used throughput time (total simulated time)
  and CPU efficiency (CPU busy time / throughput time); see the simulation
  sketch at the end of these notes;
- no tested algorithm was better than their "best" algorithm or worse than
  their "worst" algorithm (the inverse of the "best");
- in nonpreemptive algorithms, scheduling is done only when the running program
  performs an i/o operation;
- they didn't count context-switch overhead;
- some algorithms used some sort of aging to estimate the "next CPU service
  time" (time until the next i/o); see the prediction sketch after these notes;

Results:
--------
- RR is surprisingly good; predictive methods without an upper bound on service
  perform poorly;

Conclusions:
-----------
- the degree of multiprogramming was very limited, and that is maybe the reason
  why related work didn't explore preemptive scheduling much;

Problems:
--------
- it seems to me that starvation may occur (e.g., under the "best"
  shortest-next-burst discipline, a job with a long CPU burst could wait
  indefinitely while shorter bursts keep arriving);
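
Sketch: aging prediction of the next CPU burst
----------------------------------------------
The notes above mention predictors that "age" a job's past behavior. A minimal
sketch of one common form of this idea, an exponentially weighted average of
the observed bursts, is below; the weight alpha and the sample bursts are
assumptions for illustration, not values from the paper (which evaluated
several predictive variants).

    def update_prediction(predicted, observed, alpha=0.5):
        """Blend the last observed CPU burst into the running prediction."""
        return alpha * observed + (1 - alpha) * predicted

    # example: a job whose bursts shrink over time (made-up numbers)
    prediction = 10.0                       # arbitrary initial guess
    for burst in [12.0, 8.0, 4.0, 4.0]:
        prediction = update_prediction(prediction, burst)
        print(f"observed {burst:5.1f} -> next burst predicted as {prediction:5.2f}")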
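
Sketch: "best" vs. round-robin on a toy trace
---------------------------------------------
A small, self-contained sketch of a trace-driven comparison in the spirit of
the paper: one CPU, i/o assumed fully overlapped, all jobs present at time
zero, and no context-switch cost (as in the paper). The toy trace, the quantum,
and the assumption that the "best" discipline knows each next burst exactly are
mine; the metrics follow the notes (throughput time, and CPU efficiency =
CPU busy time / throughput time).

    import heapq

    def simulate(jobs, quantum=None):
        """jobs: one list of (cpu_burst, io_time) pairs per job.
        quantum=None -> nonpreemptive shortest-next-burst ("best");
        quantum=q    -> round-robin with time slice q.
        Returns (throughput_time, cpu_efficiency)."""
        n = len(jobs)
        clock = busy = 0.0
        progress = [0] * n                               # current (cpu, io) pair per job
        ready = [(jobs[j][0][0], j) for j in range(n)]   # (remaining burst, job id)
        io_events = []                                   # heap of (i/o completion time, job id)
        finished = 0

        while finished < n:
            # move jobs whose i/o has completed back into the ready list
            while io_events and io_events[0][0] <= clock:
                _, j = heapq.heappop(io_events)
                ready.append((jobs[j][progress[j]][0], j))
            if not ready:                  # CPU idle: jump to the next i/o completion
                clock = io_events[0][0]
                continue
            if quantum is None:            # "best": shortest next burst, run it to the i/o
                ready.sort()
                remaining, j = ready.pop(0)
                run = remaining
            else:                          # round-robin: FIFO order, bounded slice
                remaining, j = ready.pop(0)
                run = min(quantum, remaining)
            clock += run
            busy += run
            remaining -= run
            if remaining > 0:              # quantum expired: back of the ready queue
                ready.append((remaining, j))
            else:                          # burst finished: start i/o or terminate
                io_time = jobs[j][progress[j]][1]
                progress[j] += 1
                if progress[j] < len(jobs[j]):
                    heapq.heappush(io_events, (clock + io_time, j))
                else:                      # i/o after the final burst is ignored
                    finished += 1
        return clock, busy / clock

    # made-up toy trace: an i/o-bound job and a cpu-bound job, both present at t=0
    trace = [
        [(1.0, 4.0), (1.0, 4.0), (1.0, 4.0)],   # i/o bound: short bursts, long i/o
        [(10.0, 0.0)],                          # cpu bound: one long burst
    ]
    print('"best" :', simulate(trace))
    print("RR(q=2):", simulate(trace, quantum=2.0))

Numbers from a toy trace like this will not reproduce the paper's ordering of
disciplines (here RR can even come out ahead, echoing the "RR is surprisingly
good" result); the point is only to make the mechanics and the two metrics
concrete.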