# Virtual Memory
## Introduction
Virtual memory is a memory management technique that translates, on the fly, "ideal" memory addresses into real physical memory addresses.
It is well described, for example, on [Wikipedia](https://en.wikipedia.org/wiki/Virtual_memory).
Here we choose to implement virtual memory using paging only, so segmentation stays configured as a single `flat` model (cf. `core/gdt.c`).
Paging is mainly implemented in `arch/ARCH/paging.c`.
## x86 and mirroring
We need a way to access and modify the structures the MMU uses to translate `virtual memory -> physical memory`.
On x86, the top-level structure is called the `PD`, or Page Directory (its physical address is held in the `CR3` register).
Once paging is enabled on the CPU, we use a trick called `mirroring` to access and modify the PD and its content.
This gives us a way to map a physical memory page to any given virtual page.
### PD implementation details
In a virtual address (Vaddr), the first 10 bits (MSB) are an index into the Page Directory.
A Page Directory Entry (PDE) points to a Page Table. The next 10 bits are an index into this Page Table. A Page Table Entry (PTE) then points to a physical page, to which the remaining 12 bits are added as an offset.
So there are 1024 entries in the PD, each pointing to a PT of 1024 entries.
Each PTE points to a 4 KiB page.
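As an illustration, here is a minimal sketch of that 10/10/12 split (the macro names are hypothetical, not necessarily those used in `arch/ARCH/paging.c`):
```c
#include <stdint.h>

/* Hypothetical helpers illustrating the 10/10/12 split of a 32-bit Vaddr. */
#define PD_INDEX(vaddr)    (((uint32_t)(vaddr) >> 22) & 0x3FF) /* bits 31..22 */
#define PT_INDEX(vaddr)    (((uint32_t)(vaddr) >> 12) & 0x3FF) /* bits 21..12 */
#define PAGE_OFFSET(vaddr) ((uint32_t)(vaddr) & 0xFFF)         /* bits 11..0  */

/* Example: 0x00403025 -> PD index 1, PT index 3, offset 0x025. */
```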
The first addresses (up to the pageDesc array from mem.c) are mapped such that Paddr == Vaddr. To keep the PD accessible once paging is enabled, an x86 trick is used: mirroring.
A chosen entry N in the PD points to the PD itself (this is possible because, on x86, a PDE looks very much like a PTE). So the 4 MiB virtual region starting at N << (10 + 12), i.e. N * 4 MiB, is translated through the PD used as a Page Table.
Then, accessing N * 4 MiB + I * 4 KiB accesses the PT of the I-th entry in the PD (since the MMU treats the PD pointed to by PDE number N as if it were a PT).
In particular, accessing N * 4 MiB + N * 4 KiB accesses the PD itself.
The mirror region thus starts at Vaddr N * 4 MiB, and the PD itself takes 4 KiB inside it. Each PT is allocated dynamically.
Just make sure that N has not already been used by the identity mapping.
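As a sketch (assuming a hypothetical mirror slot; the actual N is chosen in `arch/ARCH/paging.c`), the mirrored addresses can be computed like this:
```c
#include <stdint.h>

#define MIRROR_IDX  1023U                  /* hypothetical N: last PD entry */
#define MIRROR_BASE (MIRROR_IDX << 22)     /* N * 4 MiB                     */

/* Virtual address of the Page Table backing PD entry i (0 <= i < 1024). */
static inline uint32_t *pt_vaddr(uint32_t i)
{
    return (uint32_t *)(MIRROR_BASE + i * 4096U);  /* N * 4 MiB + i * 4 KiB */
}

/* Virtual address of the Page Directory itself (the i == N slot). */
static inline uint32_t *pd_vaddr(void)
{
    return pt_vaddr(MIRROR_IDX);                   /* N * 4 MiB + N * 4 KiB */
}
```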
## Virtual memory allocators
We set up two different virtual memory allocators:
* `allocArea` for large memory areas of several PAGE_SIZE in size
* `alloc` for objects of any size
Since each needs the other (`allocArea` needs `alloc` to allocate the `struct memArea` that keeps track of an area, and `alloc` needs `allocArea` to allocate large memory regions), we must make sure there is no infinite loop between them.
### `allocArea`
An area is represented by `struct memArea` and basically consists of a virtual memory address and a size in number of pages.
This is a simple allocator keeping two linked lists of used and free areas.
Allocating a new area (thanks to `areaAlloc()`) consists of:
1. finding an area big enough in the free list;
2. splitting it if the found area is too large;
3. putting the found area in the used area list;
4. optionally mapping the area to physical pages.
Freeing an area (thanks to `areaFree()`) consists of trying to find adjacent free areas and merging them with the freed one, or simply adding the area to the free list.
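As a rough sketch of this structure and of the first-fit lookup in step 1 (field and helper names are illustrative, not necessarily the real ones from the sources):
```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative layout: a virtual start address and a size in pages. */
struct memArea {
    uintptr_t startAddr;      /* first virtual address of the area     */
    unsigned int nbPages;     /* size in number of PAGE_SIZE pages     */
    struct memArea *next;     /* next area in the free or used list    */
};

/* First-fit walk of the free list (splitting and mapping left out). */
static struct memArea *findFreeArea(struct memArea *freeList, unsigned int nbPages)
{
    for (struct memArea *area = freeList; area != NULL; area = area->next) {
        if (area->nbPages >= nbPages)
            return area;      /* big enough: the caller may split it   */
    }
    return NULL;              /* no free area large enough             */
}
```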
### `alloc`
For large object allocations (> PAGE_SIZE), this is basically a call to `areaAlloc()`.
For smaller objects, this is a more complex allocator based on the concept of slab.
The allocator is configured at startup to maintain slabs, each allocating objects of a given size.
So a slab is a linked list of `struct slabEntry`, each entry pointing to one or several memory areas divided into chunks of the slab's configured size.
Since it is a linked list, it is easy to add a new `struct slabEntry` when one is full.
Inside a `struct slabEntry`, the `freeEl` attribute points to the next free chunk (of the slab's configured size) in the allocated page(s). At that address, a pointer to the next free chunk of the same area is also stored, forming an embedded free list.
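The embedded free list can be pictured with this simplified sketch (the real `struct slabEntry` has more fields; `slabPop()` is a hypothetical helper shown only to illustrate how `freeEl` is followed):
```c
#include <stddef.h>

/* Simplified view: each free chunk stores, in its first bytes, a pointer
 * to the next free chunk of the same slab area. */
struct slabEntry {
    void *freeEl;             /* first free chunk in this entry's area */
    struct slabEntry *next;   /* next slabEntry when this one is full  */
};

/* Pop one chunk from the embedded free list (no "slab full" handling). */
static void *slabPop(struct slabEntry *slab)
{
    void *chunk = slab->freeEl;
    if (chunk != NULL)
        slab->freeEl = *(void **)chunk;  /* next free chunk was stored here */
    return chunk;
}
```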