CPS 356 Lecture notes: Allocation and Thrashing



Coverage: [OSCJ] §§9.5-9.6 (pp. 418-427)


Overview

    When should we run the page replacement algorithm?

    Two main questions in the study of virtual memory:

    • how do we replace pages?
    • how do we allocate frames?


Allocation of Frames

  • maintain 3 free frames at all times
  • consider a machine where all memory-reference instructions have only 1 memory address → need at least 2 frames: one for the instruction itself and one for the memory reference
    • now consider indirect modes of addressing
    • potentially every page in virtual memory could be touched, and the entire virtual memory must be in physical memory
    • place a limit on the levels of indirection
  • the minimum number of frames per process is defined by the computer architecture
  • the maximum number of frames is defined by the amount of available physical memory


Allocation Algorithms

  • m frames, n processes
  • equal allocation: give each process m/n frames
  • an alternative is to recognize that different processes need different amounts of memory
    • consider
      • frame size of 1K
      • 62 free frames
      • a student process requiring 10K (10 pages)
      • an interactive database requiring 127K (127 pages)

      it makes no sense to give each process 31 frames

      the student process needs no more than 10 frames, so the other 21 frames are wasted

  • proportional allocation:
    • m frames available
    • size of virtual memory for process pi is si
    • S = Σ si
    • ai = (si/S) * m

    student process gets (10/137) * 62 ≈ 4 frames

    database gets (127/137) * 62 ≈ 57 frames (see the sketch after this list)

  • what about priority? use priority rather than relative size to determine ratio of frames
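  • a minimal sketch (in Python, with the example numbers above) contrasting equal and proportional allocation; the function names are only illustrative, since real allocations are computed inside the OS:

      # Sketch: equal vs. proportional frame allocation. The sizes (a 10K
      # student job and a 127K database) and m = 62 free frames are the
      # hypothetical values from the example above.

      def equal_allocation(m, n):
          """Give each of the n processes m // n frames."""
          return [m // n] * n

      def proportional_allocation(m, sizes):
          """Give process i roughly a_i = (s_i / S) * m frames, where S = sum of sizes."""
          S = sum(sizes)
          return [(s * m) // S for s in sizes]

      m = 62                  # free frames
      sizes = [10, 127]       # pages needed: student process, database

      print(equal_allocation(m, len(sizes)))      # [31, 31] -- wasteful for the student job
      print(proportional_allocation(m, sizes))    # [4, 57]  -- matches the numbers above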


Global vs. Local Allocation

  • is page-replacement global or local?
  • global: one process can select a replacement frame from the set of all frames (i.e., one process can take a frame from another or itself) (e.g., high priority processes can take the frames of low priority processes)
  • local: each process can only select from its own set of allocated frames
  • local page replacement is more predictable; a process's paging behavior depends only on its own reference pattern, not on external factors
  • a process that uses global page replacement cannot predict its own page-fault rate; it may execute in 0.5 seconds on one run and 10.3 seconds on another
  • overall, global replacement results in greater system throughput


Thrashing

  • simple example of thrashing: a process that actively uses 2 pages but is allocated only 1 frame must fault continually
  • this kind of very high paging activity is called thrashing
  • a process is thrashing if it spends more time paging than executing
  • scenario:

    The process scheduler sees that CPU utilization is low.

    So we increase the degree of multiprogramming by introducing a new process into the system.

    One process now needs more frames.

    It starts faulting and takes frames away from other processes (i.e., global page replacement).

    These processes need those pages and thus they start to fault, taking frames from other processes.

    These faulting processes must use the paging device to swap pages in and out.

    As they queue up on the paging device, the ready-queue empties.

    However, as processes wait on the paging device, CPU utilization decreases.

    The process scheduler sees the decreasing CPU utilization and increases the degree of multiprogramming, ad infinitum.

    The result is thrashing: the page-fault rate increases tremendously, effective memory-access time increases, and no work gets done because all the processes are spending their time paging.

  • Fig. 9.18
  • we can limit the effects of thrashing by using a local replacement algorithm (or priority replacement algorithm)

      but this only partially solves the problem

      If one process starts thrashing, it cannot steal frames from another process and cause the latter to thrash as well.

      However, the thrashing processes will sit in the queue for the paging device, which increases the time needed to service a page fault; therefore, the effective access time increases even for processes that are not thrashing.

  • to really prevent thrashing, we must provide processes with as many frames as they need.

      But how do we know how many frames a process "needs"?

      Look at how many frames a process actually "uses".


Locality Model

  • as a process executes, it moves from locality to locality
  • a locality is a set of pages that are actively used together; a program is generally composed of several localities, which may overlap
  • for instance, when a function is called, it defines a new locality
  • the locality model is the basic unstated assumption behind caching; if accesses to any type of data were random rather than patterned, caching would be useless
  • if we allocate enough for a process to accommodate its current locality, it will fault for the pages in its locality until all of these pages are in memory. Then it will not fault again until it changes locality.
  • if we allocate fewer frames than the size of the current locality, the process will thrash, since it cannot keep in memory all the pages it is actively using
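  • a small sketch of the locality idea: map 2-D array accesses to page numbers and count how many distinct pages a short run of references touches; the page size, element size, and array dimensions are assumed values, not from the text:

      PAGE_SIZE = 1024          # bytes per page (assumed)
      ELEM_SIZE = 4             # bytes per array element (assumed)
      ROWS = COLS = 256         # one 256-element row fills exactly one page here

      def page_of(row, col):
          # row-major layout: element (row, col) starts at byte (row*COLS + col)*ELEM_SIZE
          return (row * COLS + col) * ELEM_SIZE // PAGE_SIZE

      def pages_touched(order, window=256):
          # distinct pages referenced by the first `window` accesses
          return len({page_of(r, c) for (r, c) in order[:window]})

      row_major = [(r, c) for r in range(ROWS) for c in range(COLS)]
      col_major = [(r, c) for c in range(COLS) for r in range(ROWS)]

      print(pages_touched(row_major))   # 1   -- the whole window stays in one page
      print(pages_touched(col_major))   # 256 -- every access lands on a different page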


Working Set Model

  • based on the assumption of locality
  • Δ defines the working-set window: some fixed number of recent memory references
  • examine the most recent Δ page references
  • the set of pages referenced in the most recent Δ references is the working set; it approximates the program's current locality
  • Fig. 9.20
  • the accuracy of the working set depends on the selection of Δ
  • if Δ is too small, it will not encompass the entire locality
  • if Δ is too large, it may overlap several localities
  • if Δ is ∞, the working set is the set of all pages touched during process execution
  • WSSi is working set size for process pi
  • D = Σ WSSi, where D is the total demand for frames
  • if D > m, then thrashing will occur, because some processes will not have enough frames (see the sketch at the end of this section)

  • Using the working-set strategy is simple:

  • the OS monitors the working set of each process
  • and allocates to each process enough frames to cover its working-set size
  • if there are enough extra frames, a new process can be initiated
  • if the sum of the working set sizes increases, exceeding the total number of available frames, the OS selects a process to suspend
  • the working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible, thereby optimizing CPU utilization
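  • a minimal sketch of the working-set bookkeeping described above; the reference string, Δ, and the choice of which process to suspend (largest working set first) are assumptions for illustration:

      # Working set at "time" t: the distinct pages referenced in the last
      # delta references. The reference string and delta are made-up values.

      def working_set(refs, t, delta):
          start = max(0, t - delta + 1)
          return set(refs[start:t + 1])

      refs  = [1, 2, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6]
      delta = 10
      ws = working_set(refs, len(refs) - 1, delta)
      print(ws, "WSS =", len(ws))       # {1, 2, 3, 6, 7} WSS = 5

      # If D = sum of the WSS_i exceeds m (physical frames), the system is
      # overcommitted; suspend processes (largest first here, an arbitrary
      # policy) until the total demand fits.
      def processes_to_suspend(wss, m):
          keep, suspended = dict(wss), []
          for pid, _ in sorted(wss.items(), key=lambda kv: -kv[1]):
              if sum(keep.values()) <= m:
                  break
              suspended.append(pid)
              del keep[pid]
          return suspended

      print(processes_to_suspend({"p1": 20, "p2": 35, "p3": 15}, m=62))   # ['p2']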


Page Fault Frequency

  • the working-set model is successful, but keeping track of the working set can become complex
  • using page-fault frequency (PFF) is a more direct approach to preventing thrashing: establish upper and lower bounds on the desired page-fault rate
  • if a process's PFF is too high (above the upper bound), the process needs more frames, so allocate it another frame
  • if its PFF is too low (below the lower bound), the process has too many frames, so remove a frame from it (a control loop along these lines is sketched below)
  • Fig. 9.21
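  • a minimal sketch of a PFF-style control loop; the bounds and the fault-rate measurement are assumptions, not values from the text:

      UPPER = 0.10    # faults per reference: above this, the process needs more frames (assumed)
      LOWER = 0.02    # faults per reference: below this, it can give a frame back (assumed)

      def adjust_frames(frames, faults, references):
          # grow the allocation when the measured fault rate is above the upper
          # bound, shrink it when below the lower bound, otherwise leave it alone
          rate = faults / references
          if rate > UPPER:
              return frames + 1
          if rate < LOWER and frames > 1:
              return frames - 1
          return frames

      print(adjust_frames(frames=8, faults=30, references=200))   # rate 0.15  -> 9
      print(adjust_frames(frames=8, faults=1,  references=200))   # rate 0.005 -> 7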


References

    [OSCJ] A. Silberschatz, P.B. Galvin, and G. Gagne. Operating System Concepts with Java. John Wiley and Sons, Inc., Eighth edition, 2010.
