Week 28 (Week 3 of CST334)
This week we learned how operating systems manage memory, starting with program addresses and address spaces. The key idea of virtual memory is that compiled code uses virtual addresses; at run time, the system checks each address for validity and translates it to a physical address.
Next we learned about the C language's use of dynamically-allocated memory and how it differs from Java. We then ran through examples of the memory allocation functions common to C: malloc() and its counterpart free(), which must be called to avoid memory leaks. The big difference here is that Java's garbage collector handles that deallocation for us, while C gives us finer control over how memory is allocated and released, when that matters for development.
Then we learned about address translation and the basics of base-and-bounds as a way of fitting a process's virtual address space into the system's physical memory. The main idea is that compiled code uses virtual addresses, which are translated into physical addresses at run time; if a program tries to access memory outside its virtual address space, a trap occurs. This addresses some of the protection concerns we've been running into in previous lessons.
Base-and-bounds assumes that a process's address space is contiguous in physical memory, that it is smaller than physical memory (otherwise that would be an entirely new problem) so it can be contained within it, and lastly that all user virtual address spaces are the same size.
In the base-and-bounds setup, the base is the physical address of virtual address 0, and the bounds is the size of the virtual address space. The memory management unit (MMU) has base and bounds registers for these values; before the operating system runs a process, it loads that process's base and bounds values into the registers.
The problem with these assumptions is that they aren't always true. Most processes also don't use their entire address space, so a lot of the memory allocated to a process can go to waste. Segmentation can be used to address the wasted space while adding protection: each segment gets its own permissions, with the heap and stack writable and the code segment read-only, so that a process can't modify its own instructions. But segmentation, alas, also leads to fragmentation issues.
The last topic we covered this week was paging, an alternative to segmentation that avoids fragmentation by breaking the virtual address space into pages of equal size. The trade-off is that translation requires large page-table data structures that won't fit into MMU registers, and the translation process needs extra memory accesses, so it is slow without further help.
I personally found the topics fairly easy to follow, and I can see how they address some of the speed, simplicity, and safety issues we've discussed over the past couple of weeks. It feels like we're narrowing down the problems as we learn about more tools to address them.
I struggled the most with the virtual memory lab this week: mostly with calculating the physical addresses. I also struggled a little with converting between hexadecimal and binary, but once I got the hang of it, I found it fun. This was similar to my experience last week calculating the turnaround times (TATs) for the different scheduling methodologies, though I did a bit better with the conversions this week.