Notes on uOttawa's operating systems course.
I took CSI3131: Operating Systems during the Winter 2019 semester. The notes below reflect my learning over the duration of the course. Each section anchor is hyperlinked in the table of contents above. References and footnotes are present[^1] and can be found at the end of the document.
Operating Systems covers the design and implementation of basic operating systems, and provides students with hands-on experience solving OS-level problems involving processes, threading, memory management and Input/Output. Section A is being taught by Burak Kantarci ([email protected])
A mark of 50% must be scored on the midterm and final to pass.
If (midterm × 0.20) + (final × 0.50) is greater than 35%, then the mark is calculated as:
Assignments: 15%
Labs: 15%
Midterm: 20%
Final Exam: 50%
Bonus Quizzes: +5%
Otherwise, the grade is calculated with assignments, labs and bonus ignored. Failure is likely.
Primary:
Title: Operating System Concepts/Essentials
Authors: Silberschatz, Galvin, Gagne
Publisher: Wiley, 2011
Secondary:
Title: Operating Systems: Internals and Design Principles
Author: William Stallings
Edition: Fourth
Title: Applied Operating System Concepts
Author: A. Silberschatz et al.
Publisher: Wiley, 2000
| Date | Event |
|---|---|
| 2019-02-11 | Assignment 1 due |
| 2019-02-26 | Midterm exam |
Winter 2019 lectures run from January 7th to April 5th; the second week of the year through to the end of week fourteen. Lecture notes are labeled according to week, then by day/lecture. My schedule for this class/year is as follows:
Tuesday 1600h - Lecture in Morisset Hall 218
Tuesday 1730h - Laboratory in SITE 2052
Thursday 1430h - Lecture in Morisset Hall 218
Thursday 1430h - Tutorial in Simard Hall 425
Each week entry is filled with the material that will be covered in the coming week, to review on the weekend-of. Each event entry should contain a summary of the learning and notes.
MathJax has been included for the presentation of formulae and mathematics, tutorial here.
It can be written inline, like so: $$ A \times B = C $$ - pretty neat. The primary means of inclusion will probably be in a block, to present formulae:
$$ v_{O} = IR = \left( \frac{R}{ R + r_{D}} \right) (v_{s} + V_{DO}) $$
Might be good to glance though the first chapter or two of the textbook in preparation for the winter semester.
Chapter One is an introduction and overview of computing environments.
Chapter Two is on Operating-System Structures
Solutions to the practice questions and extra materials are available at http://www.os-book.com.
The first OS lecture covered the syllabus, followed by simple computer architecture and threading.
Labs will not be graded, but attendance is highly recommended. There will be four assignments.
I/O is short for Input/Output. Common implementations:
Today’s CSI lecture discussed parent and child processes, including use of the fork() system call (paired with wait() and exit()) to orchestrate a simple parent+child process.
No lab this week.
Generally, this week has been discussing processes. The following concepts are earmarked as important:
Job, task and process are used interchangeably during the lectures.
Process states:
Process information is saved in a PCB (Process Control Block) which includes all of the context for a process in the waiting, ready, or running state. When program execution is put on hold, the PCB is saved so execution can resume with CPU registers in the same state at a later time:
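A rough sketch of the kind of fields a PCB typically holds; the struct and field names here are illustrative, not taken from any particular kernel:

// Illustrative only: real PCBs (e.g. Linux's task_struct) hold far more.
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

typedef struct pcb {
    int          pid;             // Process identifier.
    proc_state   state;           // Current scheduling state.
    void        *program_counter; // Where to resume execution.
    long         registers[16];   // Saved CPU register contents.
    void        *page_table;      // Memory-management information.
    int          open_files[16];  // I/O status: open file descriptors.
    struct pcb  *next;            // Link for whichever queue the process is on.
} pcb;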
Given a list of I/O, System and User programs, a Scheduler must be utilized to determine the order in which waiting tasks are executed.
Process Scheduling Queues can be used. Job (all processes), Ready (processes in main memory) and Device (I/O) queues separate processes and run them based on queue priority. In this scenario, Long Term Schedulers are responsible for moving jobs from the Job (new state) to Ready queue (ready state), and the Short Term Scheduler manages programs in the main memory, choosing to make them run or wait. Medium Term Schedulers may also be used to swap processes from the main memory to disk (Swap Space).
/*
 * rcf016_fork.c - Experimental program that uses UNIX sys calls.
 * Copyright (C) 2018 Ryan Fleck under the GNU GPLv3.
 * Compiled & tested using GCC.
 */
#include <stdio.h>
#include <unistd.h> // Provides fork()
#include <stdlib.h> // Provides exit()
// sys/types and sys/wait provide wait()
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char **argv)
{
    puts("I am a process, about to fork().");
    int pid = fork(); // Forks _at this point_ in execution.

    if (pid == 0) {
        puts("I am child.");
        exit(0);
    } else {
        int status;
        wait(&status);
        printf("I am parent. Child exited with status %i.\n", status);
    }
    return 0;
}
The DGD iterated through probable quiz/midterm questions. I’ll pay more attention and take better notes next time.
What is the purpose of an OS? → Supervision and management of processes.

Feb 11: Assignment 1 due. Snow days canceled the majority of CSI class this week.
fsck is used to check and optionally repair one or more Linux file systems. filesys can be a device name (e.g. /dev/hdc1, /dev/sdb2), a mount point (e.g. /, /usr, /home), or an ext2 label or UUID specifier (e.g. UUID=8868abf6-88c5-4a83-98b8-bfc24057f7bd or LABEL=root). Normally, the fsck program will try to handle filesystems on different physical disk drives in parallel to reduce the total amount of time needed to check all of the filesystems.[^2]
Reading week. Midterm preparation. Date: Feb 26. Covers:
Module-1: Operating System Overview
Module-2: Processes
Module-3: Threads
Module-4: CPU Scheduling
Module-5: Process and Thread Synchronization, up to but not including Semaphores
Approach:
Cache is very fast CPU memory. When requesting data, the processor will check the cache for a word, and pull a block with the word from main memory if not present.
Access Time - how long does it take to bring a referenced word into the processor?
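If we assume a cache hit ratio H, a cache access time T₁, and a main-memory access time T₂ (symbols chosen here for illustration, not taken from the slides), the average access time for this two-level arrangement works out to:

$$ T_{avg} = H \times T_1 + (1 - H) \times (T_1 + T_2) $$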
Types:
Programmed IO
Interrupt-Driven IO
Direct Memory Access
Chapter 3 of OS Concepts Essentials, Silberschatz.
Heap memory is managed with malloc() and free(); the stack holds local variables (int, double, structs).[^4]
New processes are fork()ed: int pid = fork();
int execl(const char *path, const char *arg, ...); replaces the current process image with a new program.
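A minimal sketch of combining fork() and execl(): the child replaces itself with another program while the parent waits. The /bin/ls path and arguments are only illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();            // Duplicate the current process.
    if (pid == 0) {
        // Child: replace this process image with /bin/ls.
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl failed");    // Only reached if execl() fails.
        exit(1);
    }
    int status;
    wait(&status);                 // Parent waits for the child to finish.
    printf("Child exited with status %d.\n", WEXITSTATUS(status));
    return 0;
}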
exit - call made by the program, graceful exit.
kill - pid can be called to kill an unwanted process.
terminateProcess - like kill, but on Winblows.
wait() in parent collects the child's exit status.
Message passing uses send and receive operations.
File descriptors 0, 1 and 2 are stdin, stdout and stderr.
Pipes require fork()ing, and opening/closing the pipe on the correct ends:
int file_descriptors[2]; // Create arr to save fdrs
pipe( file_descriptors ); // Create pipe, save fdrs into arr
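A minimal sketch of a parent writing to a child through an anonymous pipe, closing the unused end on each side; the message and buffer size are arbitrary:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    pipe(fd);                      // fd[0] = read end, fd[1] = write end.

    if (fork() == 0) {
        close(fd[1]);              // Child only reads: close the write end.
        char buf[32];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("Child received: %s\n", buf);
        }
        close(fd[0]);
        return 0;
    }

    close(fd[0]);                  // Parent only writes: close the read end.
    const char *msg = "hello";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);                  // Closing signals EOF to the reader.
    wait(NULL);
    return 0;
}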
mkfifo() creates a named pipe, better than regular pipes… how? No idea.
Signal 02 SIGINT interrupts processes.
Signal 09 SIGKILL, aka KILL-9, terminates a process, see Monzy vid.
Sockets are identified by an IP address and port, e.g. 201.12.99.14:696. RPC builds on this, letting a client call a procedure on a remote server as if it were local:
// On the client.
val = server.methodName( A, B );

// Meanwhile, on the server...
boolean methodName( Object a, Object b ){
    // Operations.
}
Chapter 4 of OS Concepts Essentials, Silberschatz.
Libraries
PThreads: POSIX standard API for kthread init/sync.
#include <stdio.h>
#include <pthread.h>

void *worker(void *arg){
    printf("I am Worker Thread %d\n", *(int *)arg);
    pthread_exit(0);
}

int main(void){
    pthread_t tid[6];
    int id[6];
    for(int i = 0; i < 6; i++){
        id[i] = i;
        pthread_create(&tid[i], NULL, worker, &id[i]);
    }
    for(int i = 0; i < 6; i++)
        pthread_join(tid[i], NULL); // Wait for all workers to finish.
    return 0;
}
Java Threads: extend Thread class or implement Runnable for uthreads.
class Worker extends Thread{
    public void run() {
        System.out.println("I Am a Worker Thread");
    }
}

public class First{
    public static void main(String args[]) {
        Worker x = new Worker();
        x.start();
        System.out.println("I Am The Main Thread");
    }
}
thread.join(); waits for a thread to finish.
thread.interrupt(); interrupts a thread.
Use ThreadLocal to create thread-specific data.

Implementations
clone() sets full/little sharing.

Chapter 6 of OS Concepts Essentials, Silberschatz.
There will be questions about the above optimization scenarios. The following metrics will need to be provided, along with averages (e.g. 90%, 4/23, 3us).

Generally:
Preemptive:
Basic
FCFS: First come, first served.
SJF: Shortest Job First - two cases:
NonPreemptive:
Preemptive:
CPU bursts interrupted when a shorter burst-len task arrives.
Gives minimum average waiting time.
Burst times estimated with exponential averaging:
$$ \begin{align} \tau_{n+1} & = \alpha t_n + (1 - \alpha)\tau_n \\ \tau_{n+1} & = \alpha t_n + (1 - \alpha)\alpha t_{n-1} + (1 - \alpha)^2 \alpha t_{n-2} + \dots \end{align} $$
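A small C sketch of this estimate in action; the α value and the burst history below are made up for illustration:

#include <stdio.h>

// tau_next = alpha * t_n + (1 - alpha) * tau_n
double predict_burst(double alpha, double t_n, double tau_n)
{
    return alpha * t_n + (1.0 - alpha) * tau_n;
}

int main(void)
{
    double alpha = 0.5;                      // Weight given to the newest burst.
    double tau = 10.0;                       // Initial guess for the first burst.
    double bursts[] = {6.0, 4.0, 6.0, 13.0}; // Measured CPU bursts (made up).

    for (int i = 0; i < 4; i++) {
        printf("predicted %.2f, actual %.2f\n", tau, bursts[i]);
        tau = predict_burst(alpha, bursts[i], tau); // Update the estimate.
    }
    return 0;
}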
Priority
Round Robin
n processes get 1/n of the CPU time each, in chunks of at most q time units.
A large q means first come, first serve.
A small q will cause thrashing[^7].
q should be larger than the time required to perform a context switch.

Multiple Queues
PTHREAD_SCOPE_PROCESS makes pthreads user-level threads.
PTHREAD_SCOPE_SYSTEM binds each user-level thread to a kernel thread (LWP).
yield(); to opt out of the CPU.
setPriority() to set a thread's priority.

Chapter 5 of OS Concepts Essentials, Silberschatz.
Transplanted from [Java Manual]({{ site.url }}/java#semaphores).
A Semaphore is a data structure used to address synchronization problems. It can be used in a variety of patterns. Essentially, a semaphore is a number that can be incremented or decremented, but not read, and the value of the semaphore dictates whether a thread can continue operating or must wait. The rules are well defined in the Little Book of Semaphores[^8] (summarized from the text):
1. When the semaphore is created, its value can be initialized to any integer, but afterwards the only permitted operations are increment and decrement; the current value cannot be read.
2. When a thread decrements the semaphore and the result is negative, the thread blocks and cannot continue until another thread increments the semaphore.
3. When a thread increments the semaphore and other threads are waiting, one of the waiting threads is unblocked.
A basic implementation of a semaphore in Java appears as follows, using the built-in monitor methods wait() and notify() to stop and start threads. Because Object.wait() is final and cannot be overridden, the decrement operation is named semWait() here; the patterns below write wait() and signal() for brevity.
class Semaphore{
    private int count;

    public Semaphore( int count ){
        this.count = count;
    }

    // Decrement; block the calling thread if the count drops below zero.
    synchronized public void semWait() throws InterruptedException{
        count--;
        if( count < 0 ) wait();
    }

    // Increment and wake one waiting thread, if any.
    synchronized public void signal(){
        count++;
        notify();
    }
}
Wait and signal are used in a number of different ways. At this point, it is best to discuss some common patterns to show how semaphores work, and when to apply them. In the subsections below, threads are labeled A, B, C... N, and semaphores are named sx, sy, sz... or sa, sb, sc when created to deal with a specific thread.
When thread A requires thread B to complete an action before it can continue, it must wait until thread B sends a signal. This ensures that A will never doStuff() before B does.
Semaphore sx = new Semaphore(0); // Initialized to zero so A must wait for B's signal.
/* Thread A */
sx.wait();
doStuff();
/* Thread B */
doStuff();
sx.signal();
Thread A and thread B need to wait for each other, and cannot continue executing until both finish certain commands; neither thread can proceed past a given point alone. To implement this, each thread signals its own 'ready' semaphore as it arrives, then waits on the other thread's semaphore. Whichever thread arrives first blocks on its wait() until the other thread's signal() releases it, and then both can continue to doStuff2(), though the order is not guaranteed.
// Tracks if A is ready.
Semaphore saReady = new Semaphore(0);
// Tracks if B is ready.
Semaphore sbReady = new Semaphore(0);
/* Thread A */
doStuff();
saReady.signal();
sbReady.wait();
doStuff2();
/* Thread B */
doStuff();
sbReady.signal();
saReady.wait();
doStuff2();
Short for Mutual Exclusion, this pattern ensures only one thread can execute the code in a critical section at a time. A very large number of threads can operate concurrently using this model, and it is guaranteed that only one will ever doCriticalStuff() at any given moment.
Semaphore sx = new Semaphore(1);
/* Thread N */
sx.wait();
doCriticalStuff();
sx.signal();
The Multiplex pattern allows a set number of threads to enter a critical path concurrently. This pattern is identical to the Mutex pattern, but the Semaphore is instantiated with value n as the count, where n is the thread limit.
Semaphore sx = new Semaphore(n);
/* Thread N */
sx.wait();
doCriticalStuff();
sx.signal();
An n-threaded generalization of the Rendezvous pattern. All threads will be blocked until the nth thread arrives, and then all can continue simultaneously. The solution incorporates a turnstile where the semaphore is rapidly decremented, then incremented, allowing each thread to pass through after the nth thread arrives. Unfortunately, this barrier pattern can only be used once as the turnstile does not reset itself.
// Mutex used to update the thread count.
Semaphore mutex = new Semaphore(1);
int count = 0;
// Barrier used to count incoming threads.
Semaphore barrier = new Semaphore(0); // Init as locked.
/* Thread N */
mutex.wait();
count++;
mutex.signal();
// Makes the barrier one to enable turnstile.
if( count == n ) barrier.signal();
// Turnstile occurs.
barrier.wait();
barrier.signal();
doStuff();
Threads wait before and after executing the critical section, in order to ensure no threads lap the others. Only one barrier is open at a time. When count reaches n, barrierB is locked and barrierA is opened, and vice versa. Unlocking a barrier involves incrementing its semaphore once so it can turnstile when all the threads arrive.
// Mutex used to update the thread count.
Semaphore mutex = new Semaphore(1);
int count = 0;
// Barrier used to count incoming threads.
Semaphore barrierA = new Semaphore(0); // Init as locked.
Semaphore barrierB = new Semaphore(1); // Init as open.
/* Thread N */
mutex.wait();
count++;
if( count == n ){
barrierB.wait();
barrierA.signal();
}
mutex.signal();
barrierA.wait();
barrierA.signal();
doStuff(); // Critical point.
mutex.wait();
count--;
if( count == 0 ){
barrierA.wait();
barrierB.signal();
}
mutex.signal();
barrierB.wait();
barrierB.signal();
A queue ensures threads of different types proceed in pairs.
Semaphore typeX = new Semaphore(0);
Semaphore typeY = new Semaphore(0);
/* Thread A of type X */
typeY.signal();
typeX.wait();
doStuff();
/* Thread B of type Y */
typeX.signal();
typeY.wait();
doStuff();
Final preparation. Date: Feb 10. Most questions from chapters 5-12.
Major topics:
Minor topics:
How can we programmers guarantee that a thread or process has exclusive access to a set of resources in a ‘critical section’ of the code? A good solution must meet a few requirements. Mutual Exclusion must be attained, where only a single thread executes the critical section at a time. The CS must be accessible to all threads that need to enter it (Progress), and deadlock should be avoided. Sharing is caring, so Bounded Waiting must be implemented to eliminate starvation.
The Critical Section Problem can be solved with software, hardware or higher level software abstractions like semaphores and monitors.
Peterson's Solution ensures two processes cannot enter the critical section simultaneously, and is a valid software solution to the Critical Section Problem.
/* Task i: */
while(true) {
    flag[i]=true;
    turn=j;
    while (flag[j]==true && turn==j){/* Busy-Waiting */} ;
    /* Critical Section */
    flag[i]=false; // I no longer want in
    /* Remainder Section */
}
Monitors are implemented using Semaphores, ensure mutual exclusion, and also contain procedures and local data.
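As a rough sketch (using a POSIX mutex and condition variable rather than the course's semaphore-based construction), monitor-style behaviour can be imitated in C by guarding every procedure with one lock and blocking on a condition inside it; the bounded counter here is only an illustrative example:

#include <pthread.h>

// A tiny monitor-like structure: local data plus a lock and a condition.
typedef struct {
    int value;                 // Local data, only touched while holding the lock.
    pthread_mutex_t lock;      // Ensures mutual exclusion for every procedure.
    pthread_cond_t nonzero;    // Condition: value > 0.
} counter_monitor;

void counter_init(counter_monitor *m) {
    m->value = 0;
    pthread_mutex_init(&m->lock, NULL);
    pthread_cond_init(&m->nonzero, NULL);
}

// Procedure: increment and wake anyone waiting for a nonzero value.
void counter_increment(counter_monitor *m) {
    pthread_mutex_lock(&m->lock);
    m->value++;
    pthread_cond_signal(&m->nonzero);
    pthread_mutex_unlock(&m->lock);
}

// Procedure: wait (inside the monitor) until the value is positive, then take one.
void counter_decrement(counter_monitor *m) {
    pthread_mutex_lock(&m->lock);
    while (m->value == 0)
        pthread_cond_wait(&m->nonzero, &m->lock);  // Releases the lock while waiting.
    m->value--;
    pthread_mutex_unlock(&m->lock);
}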
avoidance, prevention, allowing deadlocks then detection + recovery, ignoring the problem and pretending that deadlocks never occur in the system, recovery strategies.
fixed partitioning, dynamic partitioning, placement algorithms
address translation, page tables, how are they used and implemented, TLB, Effective Access Time (hierarchical, hashed, inverted page table)
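Assuming a TLB lookup time $\varepsilon$, a memory access time $m$, and a TLB hit ratio $\alpha$ (symbols chosen here for illustration), the Effective Access Time with a TLB is typically computed as:

$$ \text{EAT} = \alpha(\varepsilon + m) + (1 - \alpha)(\varepsilon + 2m) $$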
segment table, & in combination with paging
First in first out, a circular buffer is used to keep a pointer to the next frame to be replaced.
Least recently used; a linked list can be used to track which frame was least recently used.
Implements a FIFO circular buffer, but entries have a ‘use’ bit. When looking for a frame to replace, the pointer skips over entries whose use bit is set, clearing the bit as it passes; the first frame found with a cleared use bit is replaced.
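A minimal sketch of the clock selection step over a fixed set of frames; the frame count and fields are illustrative:

#define NFRAMES 4

typedef struct {
    int page;     // Page currently held in this frame (-1 if empty).
    int use;      // 'Use' bit, set on every reference to the page.
} frame_t;

static frame_t frames[NFRAMES];
static int hand = 0;              // Clock pointer into the circular buffer.

// Pick a frame for replacement: skip frames with the use bit set,
// clearing the bit as the hand passes, until a frame with use == 0 is found.
int clock_select(void)
{
    for (;;) {
        if (frames[hand].use == 0) {
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        frames[hand].use = 0;     // Give the page a second chance.
        hand = (hand + 1) % NFRAMES;
    }
}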
Thrashing, what and how to deal with.
Resident set Management
Page Fault Rate Strategy
ToDo
ToDo
Interesting experiments that occur as a result of knowledge gained in the classroom will be documented here, if any are completed.
Footnotes are used by placing [^ref]
where a superscript number should be placed, and [^ref]: explanation
can be used in-place or at the end of the document. All referenced footnotes will be collected and placed at the end of the document. More info. ↩︎
linux.die.net, fsck - check and repair a Linux file system ↩︎
CSI3131 Slides - Module 1 - Computer Systems Overview ↩︎
Memory: Stack vs Heap, GribbleLab ↩︎
Copy on Write ↩︎
Abbreviation: vv => vice versa. ↩︎
Thrashing - When the CPU is trying to process too many programs simultaneously, and constant context switching and swapping prevents any real work from being completed. ↩︎
Little Book of Semaphores more info needed. ↩︎