# Virtual Memory
## Introduction
Virtual memory is a memory management system that translates, on the fly, "ideal" memory addresses into real physical memory addresses.
It is well described, for example, on [wikipedia](https://en.wikipedia.org/wiki/Virtual_memory).
Here we choose to implement virtual memory using paging only, so segmentation stays configured as a single `flat` model (c.f. `core/gdt.c`).
Paging is mainly implemented in `arch/ARCH/paging.c`.
## x86 and mirroring
We need a way to access and modify the structure the MMU uses to perform the `virtual memory -> physical memory` translation.
On x86, this structure is called the `PD`, or Page Directory.
Once paging is enabled on the CPU, we use a trick called `mirroring` to access and modify the PD and its content.
This gives us a way to map a physical memory page to any given virtual page.
### PD implementation details
In a virtual address (Vaddr), the 10 most significant bits are an index into the Page Directory.
A Page Directory Entry (PDE) points to a Page Table. The next 10 bits are an index into this Page Table, and the selected Page Table Entry (PTE) points to a physical page, to which the remaining 12 bits are added as an offset.
So there are 1024 entries in the PD, each of them pointing to a PT of 1024 entries,
each PTE pointing to a 4 KiB page.
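The 10/10/12 split can be sketched with a few helpers (the helper names are illustrative, not the ones used in `paging.c`):

```c
#include <stdint.h>

/* 10 most significant bits: index into the Page Directory */
static inline uint32_t pd_index(uint32_t vaddr)
{
    return vaddr >> 22;
}

/* Next 10 bits: index into the Page Table */
static inline uint32_t pt_index(uint32_t vaddr)
{
    return (vaddr >> 12) & 0x3ff;
}

/* Remaining 12 bits: offset inside the 4 KiB page */
static inline uint32_t page_offset(uint32_t vaddr)
{
    return vaddr & 0xfff;
}
```

For example, Vaddr `0x00801234` lands in PD entry 2 (it sits in the third 4 MiB region), PT entry 1, at offset `0x234`.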
The first addresses (up to `pageDesc` from `mem.c`) are identity mapped, i.e. Paddr == Vaddr. To keep the PD always accessible, a trick is used: mirroring.
A chosen entry N of the PD points back to the PD itself (this is possible because, on x86, a PDE looks very much like a PTE). As a consequence, the virtual address N << (10 + 12), i.e. N * 4 MiB, resolves to the Paddr of the PD.
Then, accessing N * 4 MiB + I * 4 KiB accesses the PT of the I-th entry of the PD (since the MMU treats the PD, pointed to by PDE number N, as a PT).
In particular, accessing N * 4 MiB + N * 4 KiB accesses the PD itself.
The PD is thus at Vaddr N * 4 MiB and takes 4 KiB. Each PT is allocated dynamically.
Just make sure that entry N has not already been used by the identity mapping.
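Under these conventions, the mirror addresses can be computed as follows (a sketch; the helper names and the choice of mirror slot are illustrative):

```c
#include <stdint.h>

#define PAGE_SIZE  0x1000u               /* 4 KiB */
#define PDE_RANGE  (1024u * PAGE_SIZE)   /* 4 MiB covered by one PDE */

/* Vaddr of the PT backing PD entry `i`, given mirror slot `n` */
static inline uint32_t pt_mirror_vaddr(uint32_t n, uint32_t i)
{
    return n * PDE_RANGE + i * PAGE_SIZE;
}

/* Vaddr of the PD itself: the mirror slot applied twice */
static inline uint32_t pd_mirror_vaddr(uint32_t n)
{
    return pt_mirror_vaddr(n, n);
}
```

For instance, with the classic choice N = 1023 (the last PDE), the mirror region occupies the top 4 MiB of the address space and the PD appears at `0xFFFFF000`.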
## Virtual memory allocators
We will set up two different virtual memory allocators:
* `allocArea` for large memory areas spanning several PAGE_SIZE
* `alloc` for objects of any size

Since each needs the other (`allocArea` needs `alloc` to allocate the `struct memArea` that keeps track of an area, and `alloc` needs `allocArea` to obtain large blocks of memory), we must make sure this mutual dependency cannot loop forever.
### `allocArea`
An area is represented by `struct memArea` and basically consists of a virtual memory address and a size in number of pages.
This is a simple allocator keeping two linked lists of used/free areas.
Allocating a new area (thanks to `areaAlloc()`) consists of:
1. Finding an area big enough in the free list.
2. Splitting it if the found area is too large.
3. Putting the found area in the used-area list.
4. Optionally mapping the area to physical pages.
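The steps above can be sketched roughly like this (hypothetical structures and helper, not the actual kernel API; `malloc` stands in for the real `alloc`, and step 4, the physical mapping, is left out):

```c
#include <stddef.h>
#include <stdlib.h>

typedef unsigned long vaddr_t;
#define PAGE_SIZE 4096u

struct memArea {
    vaddr_t start;          /* first virtual address of the area */
    size_t nbPages;         /* size in number of pages */
    struct memArea *next;
};

static struct memArea *freeList;
static struct memArea *usedList;

/* Sketch of areaAlloc(): find, split, move to the used list. */
struct memArea *areaAllocSketch(size_t nbPages)
{
    struct memArea **link = &freeList;

    /* 1. Find an area big enough in the free list */
    while (*link && (*link)->nbPages < nbPages)
        link = &(*link)->next;
    if (*link == NULL)
        return NULL;

    struct memArea *area = *link;

    /* 2. Split it if it is too large: keep the tail in the free list */
    if (area->nbPages > nbPages) {
        struct memArea *rest = malloc(sizeof(*rest)); /* really alloc() */
        rest->start = area->start + nbPages * PAGE_SIZE;
        rest->nbPages = area->nbPages - nbPages;
        rest->next = area->next;
        *link = rest;
        area->nbPages = nbPages;
    } else {
        *link = area->next;
    }

    /* 3. Put the found area in the used-area list */
    area->next = usedList;
    usedList = area;
    return area;
}
```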
Freeing an area (thanks to `areaFree()`) consists of trying to find adjacent free areas and merging them with this one, or otherwise simply adding the area back to the free list.
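The adjacency test and the merge at the heart of `areaFree()` might look like this (a sketch with hypothetical helper names):

```c
#include <stddef.h>

typedef unsigned long vaddr_t;
#define PAGE_SIZE 4096u

struct memArea {
    vaddr_t start;
    size_t nbPages;
    struct memArea *next;
};

/* Two areas are adjacent when one ends exactly where the other begins. */
static int areasAreAdjacent(const struct memArea *a, const struct memArea *b)
{
    return a->start + a->nbPages * PAGE_SIZE == b->start
        || b->start + b->nbPages * PAGE_SIZE == a->start;
}

/* Merge b into a (they must be adjacent); b can then be discarded. */
static void mergeAreas(struct memArea *a, const struct memArea *b)
{
    if (b->start < a->start)
        a->start = b->start;
    a->nbPages += b->nbPages;
}
```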
### `alloc`
For large object allocations (> PAGE_SIZE), this is basically a call to `areaAlloc()`.
For smaller objects, this is a more complex allocator based on the concept of a slab.
The allocator is configured at startup to maintain a collection of slabs, each of them configured to allocate objects of a given size.
A slab is described by `struct slabDesc` and contains a linked list of `struct slabEntry`.
Each `struct slabEntry` points to a memory area that is divided into chunks of the slab's configured size.
Since this is a linked list, it is easy to add a new `struct slabEntry` when one becomes full.
Inside a `struct slabEntry`, the `freeEl` attribute points to the next free chunk (of the slab's configured size) in the allocated page(s). Stored at that address, in turn, is a pointer to the next free chunk in the same area.
```text
.-----------------------------.
| struct slabEntry {          |
|     vaddr_t page;           |-------------------->.---------------------.
|     void *freeEl;           |-------------.       |.------------------. |
|     size_t size;            |             |       || allocated chunk  | |
|     bool_t full;            |             |       |.-------------------.|
|     struct slabEntry *next; ||            -------->| addr of next free |----.
|     struct slabEntry *prev; ||                     |.------------------.'|  |
| };                          ||                     || allocated chunk  | |  |
'-----------------------------'|                     |.------------------. |  |
.------------------------------'                     || NULL             |<----'
'-->.-----------------------------.                  |.------------------. |
    | struct slabEntry {          |                  || allocated chunk  | |
    |     vaddr_t page;           |                  |'------------------' |
    |     void *freeEl;           |                  '---------------------'
    |     size_t size;            |
    |     bool_t full;            |
    |     struct slabEntry *next; ||
    |     struct slabEntry *prev; ||
    | };                          ||
    '-----------------------------'|
    .------------------------------'
    '-->.-----.
        | ... |
        '-----'
```
So there is one slab for objects of 4 B, one for objects of 126 B, and so on.
The linked list containing all the slabs (`struct slabDesc`) is called the slub.
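The `freeEl` mechanism can be illustrated in isolation: each free chunk stores, in its own first bytes, the address of the next free chunk, so the free list costs no extra memory. This is only a sketch with hypothetical names, not the kernel's actual slab code:

```c
#include <stddef.h>

/* Minimal stand-in for the relevant part of struct slabEntry. */
struct slabEntrySketch {
    void *freeEl;    /* head of the in-place free list */
};

/* Carve `buf` (covering `n` chunks of `chunkSize` bytes) into a
 * free list threaded through the chunks themselves. */
static void slabInitSketch(struct slabEntrySketch *s, void *buf,
                           size_t chunkSize, size_t n)
{
    char *p = buf;
    s->freeEl = buf;
    for (size_t i = 0; i + 1 < n; i++)
        *(void **)(p + i * chunkSize) = p + (i + 1) * chunkSize;
    *(void **)(p + (n - 1) * chunkSize) = NULL; /* last chunk */
}

/* Pop one chunk: hand out freeEl, then advance it to the pointer
 * stored inside the chunk we just handed out. */
static void *slabAllocSketch(struct slabEntrySketch *s)
{
    void *chunk = s->freeEl;
    if (chunk)
        s->freeEl = *(void **)chunk;
    return chunk;
}
```

When `freeEl` reaches `NULL` the entry is full, which is when the real allocator would flip `full` and chain a fresh `struct slabEntry`.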