



Main Memory

References:

  1. Abraham Silberschatz, Greg Gagne, and Peter Baer Galvin, "Operating System Concepts, Ninth Edition", Chapter 8

8.1 Background

  • Obviously memory accesses and memory management are a very important part of modern computer operation. Every instruction has to be fetched from memory before it can be executed, and most instructions involve retrieving data from memory or storing data in memory or both.
  • The advent of multi-tasking OSes compounds the complexity of memory management, because as processes are swapped in and out of the CPU, so must their code and data be swapped in and out of memory, all at high speeds and without interfering with any other processes.
  • Shared memory, virtual memory, the classification of memory as read-only versus read-write, and concepts like copy-on-write forking all further complicate the issue.

8.1.1 Basic Hardware

  • It should be noted that from the memory chips' point of view, all memory accesses are equivalent. The memory hardware doesn't know what a particular part of memory is being used for, nor does it care. This is almost true of the OS as well, although not entirely.
  • The CPU can only access its registers and main memory. It cannot, for instance, make direct access to the hard drive, so any data stored there must first be transferred into the main memory chips before the CPU can work with it. ( Device drivers communicate with their hardware via interrupts and "memory" accesses, sending short instructions for example to transfer data from the hard drive to a specified location in main memory. The disk controller monitors the bus for such instructions, transfers the data, and then notifies the CPU that the data is there with another interrupt, but the CPU never gets direct access to the disk. )
  • Memory accesses to registers are very fast, generally one clock tick, and a CPU may be able to execute more than one machine instruction per clock tick.
  • Memory accesses to main memory are comparatively slow, and may take a number of clock ticks to complete. This would require intolerable waiting by the CPU if it were not for an intermediary fast memory cache built into most modern CPUs. The basic idea of the cache is to transfer chunks of memory at a time from the main memory to the cache, and then to access individual memory locations one at a time from the cache.
  • User processes must be restricted so that they only access memory locations that "belong" to that particular process. This is usually implemented using a base register and a limit register for each process, as shown in Figures 8.1 and 8.2 below. Every memory access made by a user process is checked against these two registers, and if a memory access is attempted outside the valid range, then a fatal error is generated. The OS obviously has access to all existing memory locations, as this is necessary to swap users' code and data in and out of memory. It should also be obvious that changing the contents of the base and limit registers is a privileged activity, allowed only to the OS kernel.
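The base/limit check above can be sketched in a few lines (a Python sketch with hypothetical names and illustrative register values; real hardware performs this comparison in silicon on every access):

```python
def legal_access(address, base, limit):
    """Hardware-style check: a user-mode access is legal only if it
    falls in the half-open range [base, base + limit)."""
    return base <= address < base + limit

# Illustrative register values: base = 300040 and limit = 120900 let the
# process touch addresses 300040 through 420939 inclusive; anything else traps.
```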


Figure 8.1 - A base and a limit register define a logical address space


Figure 8.2 - Hardware address protection with base and limit registers

8.1.2 Address Binding

  • User programs typically refer to memory addresses with symbolic names such as "i", "count", and "averageTemperature". These symbolic names must be mapped or bound to physical memory addresses, which typically occurs in several stages:
    • Compile Time - If it is known at compile time where a program will reside in physical memory, then absolute code can be generated by the compiler, containing actual physical addresses. However if the load address changes at some later time, then the program will have to be recompiled. DOS .COM programs use compile time binding.
    • Load Time - If the location at which a program will be loaded is not known at compile time, then the compiler must generate relocatable code , which references addresses relative to the start of the program. If that starting address changes, then the program must be reloaded but not recompiled.
    • Execution Time - If a program can be moved around in memory during the course of its execution, then binding must be delayed until execution time. This requires special hardware, and is the method implemented by most modern OSes.
  • Figure 8.3 shows the various stages of the binding processes and the units involved in each stage:


Figure 8.3 - Multistep processing of a user program

8.1.3 Logical Versus Physical Address Space

  • The address generated by the CPU is a logical address , whereas the address actually seen by the memory hardware is a physical address .
  • Addresses bound at compile time or load time have identical logical and physical addresses.
  • Addresses created at execution time, however, have different logical and physical addresses.
    • In this case the logical address is also known as a virtual address , and the two terms are used interchangeably by our text.
    • The set of all logical addresses used by a program composes the logical address space , and the set of all corresponding physical addresses composes the physical address space.
  • The run time mapping of logical to physical addresses is handled by the memory-management unit, MMU .
    • The MMU can take on many forms. One of the simplest is a modification of the base-register scheme described earlier.
    • The base register is now termed a relocation register , whose value is added to every memory request at the hardware level.
  • Note that user programs never see physical addresses. User programs work entirely in logical address space, and any memory references or manipulations are done using purely logical addresses. Only when the address gets sent to the physical memory chips is the physical memory address generated.
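The relocation-register MMU described above amounts to a single addition per memory access; a minimal sketch (function name is invented for illustration):

```python
def mmu_translate(logical, relocation):
    """Dynamic relocation: the hardware adds the relocation register's
    value to every logical address the CPU generates."""
    return logical + relocation

# With the relocation register holding 14000, logical address 346
# maps to physical address 14346.
```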


Figure 8.4 - Dynamic relocation using a relocation register

8.1.4 Dynamic Loading

  • Rather than loading an entire program into memory at once, dynamic loading loads up each routine as it is called. The advantage is that unused routines need never be loaded, reducing total memory usage and generating faster program startup times. The downside is the added complexity and overhead of checking to see if a routine is loaded every time it is called and then loading it up if it is not already loaded.

8.1.5 Dynamic Linking and Shared Libraries

  • With static linking library modules get fully included in executable modules, wasting both disk space and main memory usage, because every program that included a certain routine from the library would have to have their own copy of that routine linked into their executable code.
  • With dynamic linking , however, only a stub is linked into the executable module, containing references to the actual library module linked in at run time.
    • This method saves disk space, because the library routines do not need to be fully included in the executable modules, only the stubs.
    • We will also learn that if the code section of the library routines is reentrant , ( meaning it does not modify the code while it runs, making it safe to re-enter it ), then main memory can be saved by loading only one copy of dynamically linked routines into memory and sharing the code amongst all processes that are concurrently using it. ( Each process would have their own copy of the data section of the routines, but that may be small relative to the code segments. ) Obviously the OS must manage shared routines in memory.
    • An added benefit of dynamically linked libraries ( DLLs , also known as shared libraries or shared objects on UNIX systems ) involves easy upgrades and updates. When a program uses a routine from a standard library and the routine changes, then the program must be re-built ( re-linked ) in order to incorporate the changes. However if DLLs are used, then as long as the stub doesn't change, the program can be updated simply by loading new versions of the DLLs onto the system. Version information is maintained in both the program and the DLLs, so that a program can specify a particular version of the DLL if necessary.
    • In practice, the first time a program calls a DLL routine, the stub will recognize the fact and will replace itself with the actual routine from the DLL library. Further calls to the same routine will access the routine directly and not incur the overhead of the stub access. ( Following the UML Proxy Pattern . )
    • ( Additional information regarding dynamic linking is available at http://www.iecc.com/linker/linker10.html )

8.2 Swapping

  • A process must be loaded into memory in order to execute.
  • If there is not enough memory available to keep all running processes in memory at the same time, then some processes who are not currently using the CPU may have their memory swapped out to a fast local disk called the backing store.

8.2.1 Standard Swapping

  • If compile-time or load-time address binding is used, then processes must be swapped back into the same memory location from which they were swapped out. If execution time binding is used, then the processes can be swapped back into any available location.
  • Swapping is a very slow process compared to other operations. For example, if a user process occupied 10 MB and the transfer rate for the backing store were 40 MB per second, then it would take 1/4 second ( 250 milliseconds ) just to do the data transfer. Adding in a latency lag of 8 milliseconds and ignoring head seek time for the moment, and further recognizing that swapping involves moving old data out as well as new data in, the overall transfer time required for this swap is 516 milliseconds, or over half a second. For efficient processor scheduling the CPU time slice should be significantly longer than this lost transfer time.
  • To reduce swapping transfer overhead, it is desired to transfer as little data as possible, which requires that the system know how much memory a process is using, as opposed to how much it might use. Programmers can help with this by freeing up dynamic memory that they are no longer using.
  • It is important to swap processes out of memory only when they are idle, or more to the point, only when there are no pending I/O operations. ( Otherwise the pending I/O operation could write into the wrong process's memory space. ) The solution is to either swap only totally idle processes, or do I/O operations only into and out of OS buffers, which are then transferred to or from process's main memory as a second step.
  • Most modern OSes no longer use swapping, because it is too slow and there are faster alternatives available. ( e.g. Paging. ) However some UNIX systems will still invoke swapping if the system gets extremely full, then discontinue swapping when the load reduces again. Windows 3.1 would use a modified version of swapping that was somewhat controlled by the user, swapping processes out if necessary and then only swapping them back in when the user focused on that particular window.
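The swap-time arithmetic in the example above can be checked directly (a sketch using the example's figures; variable names are invented):

```python
# Figures from the standard-swapping example: a 10 MB process, a
# 40 MB/s backing store, and an 8 ms average latency per transfer.
size_mb = 10
rate_mb_per_s = 40
latency_ms = 8

transfer_ms = size_mb / rate_mb_per_s * 1000   # 250 ms of pure data transfer
one_way_ms = transfer_ms + latency_ms          # 258 ms to swap one direction
total_ms = 2 * one_way_ms                      # old process out + new one in
```

Doubling for the swap-out plus swap-in gives 516 ms, well over half a second of lost transfer time.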


Figure 8.5 - Swapping of two processes using a disk as a backing store

8.2.2 Swapping on Mobile Systems ( New Section in 9th Edition )

  • Swapping is typically not supported on mobile platforms, for several reasons:
    • Mobile devices typically use flash memory in place of more spacious hard drives for persistent storage, so there is not as much space available.
    • Flash memory can only be written to a limited number of times before it becomes unreliable.
    • The bandwidth to flash memory is also lower.
  • Apple's iOS asks applications to voluntarily free up memory
    • Read-only data, e.g. code, is simply removed, and reloaded later if needed.
    • Modified data, e.g. the stack, is never removed, but . . .
    • Apps that fail to free up sufficient memory can be removed by the OS
  • Android follows a similar strategy.
    • Prior to terminating a process, Android writes its application state to flash memory for quick restarting.

eight.3 Contiguous Memory Resource allotment

  • One approach to memory management is to load each process into a contiguous space. The operating system is allocated space first, usually at either low or high memory locations, and then the remaining available memory is allocated to processes as needed. ( The OS is usually loaded low, because that is where the interrupt vectors are located, but on older systems part of the OS was loaded high to make more room in low memory ( within the 640K barrier ) for user processes. )

8.3.1 Memory Protection ( was Memory Mapping and Protection )

  • The system shown in Figure 8.6 below allows protection against user programs accessing areas that they should not, allows programs to be relocated to different memory starting addresses as needed, and allows the memory space devoted to the OS to grow or shrink dynamically as needs change.


Figure 8.6 - Hardware support for relocation and limit registers

8.3.2 Memory Allocation

  • One method of allocating contiguous memory is to divide all available memory into equal sized partitions, and to assign each process to their own partition. This restricts both the number of simultaneous processes and the maximum size of each process, and is no longer used.
  • An alternate approach is to keep a list of unused ( free ) memory blocks ( holes ), and to find a hole of a suitable size whenever a process needs to be loaded into memory. There are many different strategies for finding the "best" allocation of memory to processes, including the three most commonly discussed:
    1. First fit - Search the list of holes until one is found that is big enough to satisfy the request, and assign a portion of that hole to that process. Any fraction of the hole not needed by the request is left on the free list as a smaller hole. Subsequent requests may start looking either from the beginning of the list or from the point at which this search ended.
    2. Best fit - Allocate the smallest hole that is big enough to satisfy the request. This saves large holes for other process requests that may need them later, but the resulting unused portions of holes may be too small to be of any use, and will therefore be wasted. Keeping the free list sorted can speed up the process of finding the right hole.
    3. Worst fit - Allocate the largest hole available, thereby increasing the likelihood that the remaining portion will be usable for satisfying future requests.
  • Simulations show that either first or best fit are better than worst fit in terms of both time and storage utilization. First and best fits are about equal in terms of storage utilization, but first fit is faster.
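The three strategies above can be sketched as searches over a free list (a Python sketch; the hole sizes are illustrative, and a real allocator would also split the chosen hole and update the list):

```python
def first_fit(holes, request):
    """Index of the first hole large enough for the request, else None."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Index of the smallest adequate hole, else None."""
    fits = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(fits)[1] if fits else None

def worst_fit(holes, request):
    """Index of the largest hole, provided even it satisfies the request."""
    i = max(range(len(holes)), key=lambda i: holes[i])
    return i if holes[i] >= request else None

# Illustrative free list (sizes in KB); for a 212 KB request,
# first fit picks the 500, best fit the 300, and worst fit the 600.
holes = [100, 500, 200, 300, 600]
```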

8.3.3 Fragmentation

  • All the memory allocation strategies suffer from external fragmentation , though first and best fits experience the problems more so than worst fit. External fragmentation means that the available memory is broken up into lots of little pieces, none of which is big enough to satisfy the next memory requirement, although the sum total could.
  • The amount of memory lost to fragmentation may vary with algorithm, usage patterns, and some design decisions such as which end of a hole to allocate and which end to save on the free list.
  • Statistical analysis of first fit, for example, shows that for N blocks of allocated memory, another 0.5 N will be lost to fragmentation.
  • Internal fragmentation also occurs, with all memory allocation strategies. This is caused by the fact that memory is allocated in blocks of a fixed size, whereas the actual memory needed will rarely be that exact size. For a random distribution of memory requests, on the average 1/2 block will be wasted per memory request, because on the average the last allocated block will be only half full.
    • Note that the same effect happens with hard drives, and that modern hardware gives us increasingly larger drives and memory at the expense of ever larger block sizes, which translates to more memory lost to internal fragmentation.
    • Some systems use variable size blocks to minimize losses due to internal fragmentation.
  • If the programs in memory are relocatable, ( using execution-time address binding ), then the external fragmentation problem can be reduced via compaction , i.e. moving all processes down to one end of physical memory. This only involves updating the relocation register for each process, as all internal work is done using logical addresses.
  • Another solution as we will see in upcoming sections is to allow processes to use non-contiguous blocks of physical memory, with a separate relocation register for each block.
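The half-block internal-fragmentation estimate can be illustrated with a small helper (names and block size are chosen for illustration):

```python
BLOCK = 4096   # illustrative allocation block size, in bytes

def internal_waste(request, block=BLOCK):
    """Bytes left unused in the last, partially filled block of a request."""
    return (-request) % block

# A 13 KB request rounds up to four 4 KB blocks, wasting 3072 bytes;
# averaged over random request sizes, the waste approaches block / 2.
```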

8.4 Segmentation

8.4.1 Basic Method

  • Most users ( programmers ) do not think of their programs as existing in one continuous linear address space.
  • Rather they tend to think of their memory in multiple segments , each dedicated to a particular use, such as code, data, the stack, the heap, etc.
  • Memory segmentation supports this view by providing addresses with a segment number ( mapped to a segment base address ) and an offset from the beginning of that segment.
  • For example, a C compiler might generate 5 segments for the user code, library code, global ( static ) variables, the stack, and the heap, as shown in Figure 8.7:


Figure 8.7 - Programmer's view of a program.


8.4.2 Segmentation Hardware

  • A segment table maps segment-offset addresses to physical addresses, and simultaneously checks for invalid addresses, using a system similar to the page tables and relocation base registers discussed previously. ( Note that at this point in the discussion of segmentation, each segment is kept in contiguous memory and may be of different sizes, but that segmentation can also be combined with paging as we shall see shortly. )
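A segment-table lookup of this kind can be sketched as follows (illustrative base/limit values; a real MMU raises a hardware trap rather than a Python exception):

```python
def seg_translate(segment_table, segment, offset):
    """Map a (segment, offset) pair through a {segment: (base, limit)}
    table; an offset at or past the limit is an addressing error."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise ValueError("segmentation trap: offset beyond segment limit")
    return base + offset

# Illustrative table: segment 2 begins at 4300 and is 400 bytes long,
# so <2, 53> maps to 4353 while <2, 400> traps.
table = {2: (4300, 400), 3: (3200, 1100)}
```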


Figure 8.8 - Segmentation hardware


Figure 8.9 - Example of segmentation

8.5 Paging

  • Paging is a memory management scheme that allows processes' physical memory to be discontinuous, and which eliminates problems with fragmentation by allocating memory in equal sized blocks known as pages .
  • Paging eliminates most of the problems of the other methods discussed previously, and is the predominant memory management technique used today.

8.5.1 Basic Method

  • The basic idea behind paging is to divide physical memory into a number of equal sized blocks called frames , and to divide a program's logical memory space into blocks of the same size called pages.
  • Any page ( from any process ) can be placed into any available frame.
  • The page table is used to look up what frame a particular page is stored in at the moment. In the following example, for instance, page 2 of the program's logical memory is currently stored in frame 3 of physical memory:


Figure 8.10 - Paging hardware


Figure 8.11 - Paging model of logical and physical memory

  • A logical address consists of two parts: A page number in which the address resides, and an offset from the beginning of that page. ( The number of bits in the page number limits how many pages a single process can address. The number of bits in the offset determines the maximum size of each page, and should correspond to the system frame size. )
  • The page table maps the page number to a frame number, to yield a physical address which also has two parts: The frame number and the offset within that frame. The number of bits in the frame number determines how many frames the system can address, and the number of bits in the offset determines the size of each frame.
  • Page numbers, frame numbers, and frame sizes are determined by the architecture, but are typically powers of two, allowing addresses to be split at a certain number of bits. For example, if the logical address size is 2^m and the page size is 2^n, then the high-order m-n bits of a logical address designate the page number and the remaining n bits represent the offset.
  • Note also that the number of bits in the page number and the number of bits in the frame number do not have to be identical. The former determines the address range of the logical address space, and the latter relates to the physical address space.

  • ( DOS used to use an addressing scheme with 16-bit frame numbers and 16-bit offsets, on hardware that only supported 20-bit hardware addresses. The result was a resolution of starting frame addresses finer than the size of a single frame, and multiple frame-offset combinations that mapped to the same physical hardware address. )
  • Consider the following micro example, in which a process has 16 bytes of logical memory, mapped in 4-byte pages into 32 bytes of physical memory. ( Presumably some other processes would be consuming the remaining 16 bytes of physical memory. )
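The page-number/offset split and table lookup can be sketched for a micro example of this scale (the page-table contents below are hypothetical stand-ins, not necessarily those of Figure 8.12):

```python
PAGE_SIZE = 4   # bytes per page and per frame in the micro example

def translate(logical, page_table, page_size=PAGE_SIZE):
    """Split the logical address into (page, offset), then replace the
    page number with the frame number from the page table."""
    page, offset = divmod(logical, page_size)
    return page_table[page] * page_size + offset

# Hypothetical page table: pages 0-3 live in frames 5, 6, 1, and 2.
page_table = [5, 6, 1, 2]
# Logical address 3 is page 0, offset 3, so it maps to 5 * 4 + 3 = 23.
```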


Figure 8.12 - Paging example for a 32-byte memory with 4-byte pages

  • Note that paging is like having a table of relocation registers, one for each page of the logical memory.
  • There is no external fragmentation with paging. All blocks of physical memory are used, and there are no gaps in between and no problems with finding the right sized hole for a particular chunk of memory.
  • There is, however, internal fragmentation. Memory is allocated in chunks the size of a page, and on the average, the last page will only be half full, wasting on the average half a page of memory per process. ( Possibly more, if processes keep their code and data in separate pages. )
  • Larger page sizes waste more memory, but are more efficient in terms of overhead. Modern trends have been to increase page sizes, and some systems even have multiple size pages to try and make the best of both worlds.
  • Page table entries ( frame numbers ) are typically 32 bit numbers, allowing access to 2^32 physical page frames. If those frames are 4 KB in size each, that translates to 16 TB of addressable physical memory. ( 32 + 12 = 44 bits of physical address space. )
  • When a process requests memory ( e.g. when its code is loaded in from disk ), free frames are allocated from a free-frame list, and inserted into that process's page table.
  • Processes are blocked from accessing anyone else's memory because all of their memory requests are mapped through their page table. There is no way for them to generate an address that maps into any other process's memory space.
  • The operating system must keep track of each individual process's page table, updating it whenever the process's pages get moved in and out of memory, and applying the right page table when processing system calls for a particular process. This all increases the overhead involved when swapping processes in and out of the CPU. ( The currently active page table must be updated to reflect the process that is currently running. )


Figure 8.13 - Free frames (a) before allocation and (b) after allocation

8.5.2 Hardware Support

  • Page lookups must be done for every memory reference, and whenever a process gets swapped in or out of the CPU, its page table must be swapped in and out too, along with the instruction registers, etc. It is therefore appropriate to provide hardware support for this operation, in order to make it as fast as possible and to make process switches as fast as possible as well.
  • One option is to use a set of registers for the page table. For example, the DEC PDP-11 uses 16-bit addressing and 8 KB pages, resulting in only 8 pages per process. ( It takes 13 bits to address 8 KB of offset, leaving only 3 bits to define a page number. )
  • An alternate option is to store the page table in main memory, and to use a single register ( called the page-table base register, PTBR ) to record where in memory the page table is located.
    • Process switching is fast, because only the single register needs to be changed.
    • However memory access just got half as fast, because every memory access now requires two memory accesses - One to fetch the frame number from memory and then another one to access the desired memory location.
    • The solution to this problem is to use a very special high-speed memory device called the translation look-aside buffer, TLB.
      • The benefit of the TLB is that it can search an entire table for a key value in parallel, and if it is found anywhere in the table, then the corresponding lookup value is returned.


      Figure 8.14 - Paging hardware with TLB

      • The TLB is very expensive, however, and therefore very small. ( Not large enough to hold the entire page table. ) It is therefore used as a cache device.
        • Addresses are first checked against the TLB, and if the info is not there ( a TLB miss ), then the frame is looked up from main memory and the TLB is updated.
        • If the TLB is full, then replacement strategies range from least-recently used, LRU to random.
        • Some TLBs allow some entries to be wired down , which means that they cannot be removed from the TLB. Typically these would be kernel frames.
        • Some TLBs store address-space identifiers, ASIDs , to keep track of which process "owns" a particular entry in the TLB. This allows entries from multiple processes to be stored simultaneously in the TLB without granting one process access to another process's memory location. Without this feature the TLB has to be flushed clean with every process switch.
      • The percentage of time that the desired data is found in the TLB is termed the hit ratio .
      • ( Eighth Edition Version: ) For example, suppose that it takes 100 nanoseconds to access main memory, and only 20 nanoseconds to search the TLB. Then a TLB hit takes 120 nanoseconds total ( 20 to find the frame number and then another 100 to go get the data ), and a TLB miss takes 220 ( 20 to search the TLB, 100 to go get the frame number, and then another 100 to go get the data. ) So with an 80% TLB hit ratio, the average memory access time would be:

      0.80 * 120 + 0.20 * 220 = 140 nanoseconds

      for a 40% slowdown to get the frame number. A 98% hit rate would yield 122 nanoseconds average access time ( you should verify this ), for a 22% slowdown.

      • ( Ninth Edition Version: ) The ninth edition ignores the 20 nanoseconds required to search the TLB, yielding

      0.80 * 100 + 0.20 * 200 = 120 nanoseconds

      for a 20% slowdown to get the frame number. A 99% hit rate would yield 101 nanoseconds average access time ( you should verify this ), for a 1% slowdown.
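Both editions' effective-access-time formulas can be written out and checked (a sketch; function names are invented):

```python
def eat_8th(hit_ratio, mem_ns=100, tlb_ns=20):
    """Eighth-edition style: count the TLB search on hits and misses."""
    hit = tlb_ns + mem_ns        # 120 ns: search TLB, then fetch the data
    miss = tlb_ns + 2 * mem_ns   # 220 ns: extra access for the page table
    return hit_ratio * hit + (1 - hit_ratio) * miss

def eat_9th(hit_ratio, mem_ns=100):
    """Ninth-edition style: ignore the TLB search time entirely."""
    return hit_ratio * mem_ns + (1 - hit_ratio) * 2 * mem_ns

# eat_8th(0.80) -> 140 ns, eat_8th(0.98) -> 122 ns,
# eat_9th(0.80) -> 120 ns, eat_9th(0.99) -> 101 ns.
```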

8.5.3 Protection

  • The page table can also help to protect processes from accessing memory that they shouldn't, or their own memory in ways that they shouldn't.
  • A bit or bits can be added to the page table to classify a page as read-write, read-only, read-write-execute, or some combination of these sorts of things. Then each memory reference can be checked to ensure it is accessing the memory in the appropriate mode.
  • Valid / invalid bits can be added to "mask off" entries in the page table that are not in use by the current process, as shown by example in Figure 8.15 below.
  • Note that the valid / invalid bits described above cannot block all illegal memory accesses, due to the internal fragmentation. ( Areas of memory in the last page that are not entirely filled by the process, and may contain data left over by whoever used that frame last. )
  • Many processes do not use all of the page table available to them, especially in modern systems with very large potential page tables. Rather than waste memory by creating a full-size page table for every process, some systems use a page-table length register, PTLR , to specify the length of the page table.


Figure 8.15 - Valid (v) or invalid (i) bit in page table

8.5.4 Shared Pages

  • Paging systems can make it very easy to share blocks of memory, by simply mapping pages in multiple page tables to the same page frames. This may be done with either code or data.
  • If code is reentrant , that means that it does not write to or modify the code in any way ( it is not self-modifying ), and it is therefore safe to re-enter it. More importantly, it means the code can be shared by multiple processes, so long as each has their own copy of the data and registers, including the instruction register.
  • In the example given below, three different users are running the editor simultaneously, but the code is only loaded into memory ( in the page frames ) once.
  • Some systems also implement shared memory in this fashion.


Figure 8.16 - Sharing of code in a paging environment

8.6 Structure of the Page Table

8.6.1 Hierarchical Paging

  • Most modern computer systems support logical address spaces of 2^32 to 2^64.
  • With a 2^32 address space and 4K ( 2^12 ) page sizes, this leaves 2^20 entries in the page table. At 4 bytes per entry, this amounts to a 4 MB page table, which is too large to reasonably keep in contiguous memory. ( And to swap in and out of memory with each process switch. ) Note that with 4K pages, this would take 1024 pages just to hold the page table!
  • One option is to use a two-tier paging system, i.e. to page the page table.
  • For example, the 20 bits described above could be broken down into two 10-bit page numbers. The first identifies an entry in the outer page table, which identifies where in memory to find one page of an inner page table. The second 10 bits finds a specific entry in that inner page table, which in turn identifies a particular frame in physical memory. ( The remaining 12 bits of the 32 bit logical address are the offset within the 4K frame. )
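The 10/10/12 split of a 32-bit address can be sketched with shifts and masks (function name is invented):

```python
def split_32(addr):
    """Decompose a 32-bit logical address into its 10-bit outer page
    number, 10-bit inner page number, and 12-bit offset."""
    offset = addr & 0xFFF          # low 12 bits
    inner = (addr >> 12) & 0x3FF   # next 10 bits
    outer = (addr >> 22) & 0x3FF   # high 10 bits
    return outer, inner, offset
```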


Figure 8.17 - A two-level page-table scheme


Figure 8.18 - Address translation for a two-level 32-bit paging architecture

  • VAX Architecture divides 32-bit addresses into four equal sized sections, and each page is 512 bytes, yielding an address form of:

  • With a 64-bit logical address space and 4K pages, there are 52 bits worth of page numbers, which is still too many even for two-level paging. One could increase the paging level, but with 10-bit page tables it would take 7 levels of indirection, which would make memory access prohibitively slow. So some other approach must be used.

64 bits: two-tiered paging leaves 42 bits in the outer table.

Going to a third level still leaves 32 bits in the outer table.

8.6.2 Hashed Page Tables

  • One common data structure for accessing data that is sparsely distributed over a broad range of possible values is a hash table . Figure 8.19 below illustrates a hashed page table using chain-and-bucket hashing:


Figure 8.19 - Hashed page table
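A hashed page table with chained buckets, as in Figure 8.19, might look like this in outline ( the bucket count and hash function here are illustrative choices, not from the text ):

```python
# Minimal sketch of a hashed page table with chained buckets.
# Page numbers hash to a bucket; each bucket holds a chain of
# (page, frame) pairs that is searched linearly.

NUM_BUCKETS = 16  # illustrative; real tables would be far larger

class HashedPageTable:
    def __init__(self):
        self.buckets = [[] for _ in range(NUM_BUCKETS)]

    def map(self, page: int, frame: int) -> None:
        self.buckets[page % NUM_BUCKETS].append((page, frame))

    def lookup(self, page: int):
        # Walk the chain in the hashed bucket until the page number matches.
        for p, frame in self.buckets[page % NUM_BUCKETS]:
            if p == page:
                return frame
        return None  # not resident: a page fault would be raised here

hpt = HashedPageTable()
hpt.map(0x12345, 7)
print(hpt.lookup(0x12345))  # -> 7
```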

8.6.3 Inverted Page Tables

  • Another approach is to use an inverted page table . Instead of a table listing all of the pages for a particular process, an inverted page table lists all of the pages currently loaded in memory, for all processes. ( I.e. there is one entry per frame instead of one entry per page . )
  • Access to an inverted page table can be slow, as it may be necessary to search the entire table in order to find the desired page ( or to discover that it is not there. ) Hashing the table can help speed up the search process.
  • Inverted page tables prohibit the normal method of implementing shared memory, which is to map multiple logical pages to a common physical frame. ( Because each frame is now mapped to one and only one process. )


Figure 8.20 - Inverted page table
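In outline, an inverted page table and its linear search might look like this ( the pid/page entries are made-up sample data ):

```python
# Sketch of an inverted page table: one entry per physical frame, each
# holding (pid, page). Translation searches the table for a match, and
# the matching index IS the frame number.

inverted = [
    (1, 0x10),   # frame 0 holds page 0x10 of process 1
    (2, 0x10),   # frame 1 holds page 0x10 of process 2
    (1, 0x11),   # frame 2 holds page 0x11 of process 1
]

def translate(pid: int, page: int):
    for frame, entry in enumerate(inverted):
        if entry == (pid, page):  # linear search; hashing would speed this up
            return frame
    return None  # not resident: a page fault would be raised here

print(translate(2, 0x10))  # -> 1
```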

8.6.4 Oracle SPARC Solaris ( Optional, New Section in Ninth Edition )

8.7 Example: Intel 32 and 64-bit Architectures ( Optional )

8.7.1 IA-32 Architecture

  • The Pentium CPU provides both pure segmentation and segmentation with paging. In the latter case, the CPU generates a logical address ( segment-offset pair ), which the segmentation unit converts into a logical linear address, which in turn is mapped to a physical frame by the paging unit, as shown in Figure 8.21:


Figure 8.21 - Logical to physical address translation in IA-32

8.7.1.1 IA-32 Segmentation

  • The Pentium architecture allows segments to be as large as 4 GB, ( 32 bits of offset ).
  • Processes can have as many as 16K segments, divided into two 8K groups:
    • 8K private to that particular process, stored in the Local Descriptor Table, LDT.
    • 8K shared among all processes, stored in the Global Descriptor Table, GDT.
  • Logical addresses are ( selector, offset ) pairs, where the selector is made up of 16 bits:
    • A 13-bit segment number ( up to 8K )
    • A 1-bit flag for LDT vs. GDT.
    • 2 bits for protection codes.

    • The descriptor tables contain 8-byte descriptions of each segment, including base and limit values.
    • Logical linear addresses are generated by looking the selector up in the descriptor table and adding the appropriate base address to the offset, as shown in Figure 8.22:
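The selector layout and base-plus-offset translation described above can be sketched as follows ( the selector and base values are made up purely for illustration ):

```python
# Decode a 16-bit IA-32 selector ( 13-bit index | 1-bit table flag |
# 2-bit protection ) and form a linear address from a descriptor base.

def decode_selector(sel: int):
    rpl = sel & 0x3          # 2 protection bits ( requested privilege level )
    ti = (sel >> 2) & 0x1    # table indicator: 0 = GDT, 1 = LDT
    index = sel >> 3         # 13-bit segment number
    return index, ti, rpl

def linear_address(base: int, offset: int) -> int:
    # The segmentation unit adds the descriptor's base to the offset.
    return base + offset

print(decode_selector(0x002B))             # -> (5, 0, 3)
print(hex(linear_address(0x10000, 0x42)))  # -> 0x10042
```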


Figure 8.22 - IA-32 segmentation

8.7.1.2 IA-32 Paging

  • Pentium paging normally uses a two-tier paging scheme, with the first 10 bits being a page number for an outer page table ( a.k.a. page directory ), and the next 10 bits being a page number within one of the 1024 inner page tables, leaving the remaining 12 bits as an offset into a 4K page.

  • A special bit in the page directory can indicate that this page is a 4MB page, in which case the remaining 22 bits are all used as offset and the inner tier of page tables is not used.
  • The CR3 register points to the page directory for the current process, as shown in Figure 8.23 below.
  • If the inner page table is currently swapped out to disk, then the page directory will have an "invalid bit" set, and the remaining 31 bits provide information on where to find the swapped-out page table on the disk.
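The 10-10-12 split and the 4MB-page case can be sketched together ( the sample address is arbitrary; in real hardware the page directory entry's page-size bit would select the big-page path ):

```python
# Split a 32-bit linear address per IA-32 paging. When the directory
# entry marks a 4MB page, the low 22 bits are all used as offset.

def ia32_split(vaddr: int, big_page: bool):
    directory = (vaddr >> 22) & 0x3FF      # top 10 bits: page directory index
    if big_page:
        return directory, None, vaddr & 0x3FFFFF  # 22-bit offset, no inner table
    table = (vaddr >> 12) & 0x3FF          # next 10 bits: inner page table index
    return directory, table, vaddr & 0xFFF # low 12 bits: offset into 4K page

print(ia32_split(0x12345678, big_page=False))  # -> (72, 837, 1656)
print(ia32_split(0x12345678, big_page=True))   # -> (72, None, 3430008)
```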


Figure 8.23 - Paging in the IA-32 architecture.


Figure 8.24 - Page address extensions.

8.7.2 x86-64


Figure 8.25 - x86-64 linear address.

8.8 Example: ARM Architecture ( Optional )


Figure 8.26 - Logical address translation in ARM.


Old 8.7.3 Linux on Pentium Systems - Omitted from the Ninth Edition

  • Because Linux is designed for a wide variety of platforms, some of which offer only limited support for segmentation, Linux supports minimal segmentation. Specifically Linux uses only 6 segments:
    1. Kernel code.
    2. Kernel data.
    3. User code.
    4. User data.
    5. A task-state segment, TSS
    6. A default LDT segment
  • All processes share the same user code and data segments, because all processes share the same logical address space and all segment descriptors are stored in the Global Descriptor Table. ( The LDT is generally not used. )
  • Each process has its own TSS, whose descriptor is stored in the GDT. The TSS stores the hardware state of a process during context switches.
  • The default LDT is shared by all processes and generally not used, but if a process needs to create its own LDT, it may do so, and use that instead of the default.
  • The Pentium architecture provides 2 bits ( 4 values ) for protection in a segment selector, but Linux only uses two values: user mode and kernel mode.
  • Because Linux is designed to run on 64-bit as well as 32-bit architectures, it employs a three-level paging strategy as shown in Figure 8.24, where the number of bits in each portion of the address varies by architecture. In the case of the Pentium architecture, the size of the middle directory portion is set to 0 bits, effectively bypassing the middle directory.
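The three-level split with a zero-width middle directory can be sketched as follows ( the bit widths are illustrative parameters; the defaults mimic the Pentium case, where the middle directory is bypassed ):

```python
# Split an address into Linux's three-level fields: global directory,
# middle directory, page table, and offset. A 0-bit middle directory
# collapses the scheme to two levels, as on the Pentium.

def linux_split(vaddr, pgd_bits=10, pmd_bits=0, pte_bits=10, offset_bits=12):
    offset = vaddr & ((1 << offset_bits) - 1)
    vaddr >>= offset_bits
    pte = vaddr & ((1 << pte_bits) - 1)
    vaddr >>= pte_bits
    pmd = vaddr & ((1 << pmd_bits) - 1)  # 0 bits -> pmd index is always 0
    vaddr >>= pmd_bits
    pgd = vaddr & ((1 << pgd_bits) - 1)
    return pgd, pmd, pte, offset

print(linux_split(0x12345678))  # -> (72, 0, 837, 1656)
```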

8.9 Summary

  • ( For a fun and easy explanation of paging, you may want to read about The Paging Game. )


Source: https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/8_MainMemory.html
