Ryan's Manuals

CSI3131: Operating Systems

Notes on uOttawa's operating systems course.


Contents



Preamble

I took CSI3131: Operating Systems during the Winter 2019 Semester. The notes below reflect my learning over the duration of the course. Each section anchor is hyperlinked in the table of contents above. References and footnotes are present1 and can be found at the end of the document.


Syllabus

Operating Systems covers the design and implementation of basic operating systems, and provides students with hands-on experience solving OS-level problems involving processes, threading, memory management and Input/Output. Section A is taught by Burak Kantarci ([email protected]).

Grading

A mark of 50% must be scored on the midterm and final to pass.

If (midterm × 0.20) + (final × 0.50) is greater than 35%, then the mark is calculated as:

Assignments:   15%
Labs:          15%
Midterm:       20%
Final Exam:    50%
Bonus Quizzes: +5%

Otherwise, the grade is calculated with assignments, labs and bonus ignored. Failure is likely.

Textbooks

Primary:

Title: Operating System Concepts/Essentials
Authors: Silberschatz, Galvin, Gagne
Publisher: Wiley, 2011

Secondary:

Title: Operating Systems: Internals and Design Principles
Author: William Stallings
Edition: Fourth

Title: Applied Operating System Concepts
Author: A. Silberschatz et al.
Publisher: Wiley, 2000

Important Dates & Deadlines

Date        Event
2019-02-11  Assignment 1 due
2019-02-26  Midterm exam

Lecture Notes

Winter 2019 lectures run from January 7th to April 5th; the second week of the year through to the end of week fourteen. Lecture notes are labeled according to week, then by day/lecture. My schedule for this class/year is as follows:

Tuesday  1600h - Lecture in Morisset Hall 218
Tuesday  1730h - Laboratory in SITE 2052
Thursday 1430h - Lecture in Morisset Hall 218
Thursday 1430h - Tutorial in Simard Hall 425

Pre-Lecture Notes

Each week entry is filled with the material that will be covered in the coming week, to review on the weekend-of. Each event entry should contain a summary of the learning and notes.

MathJax has been included for the presentation of formulae and mathematics, tutorial here.

It can be written inline, like so: $$ A \times B = C $$ - pretty neat. The primary means of inclusion will probably be in a block, to present formulae:

$$ v_{O} = IR = \left( \frac{R}{ R + r_{D}} \right) (v_{s} + V_{DO}) $$

Might be good to glance through the first chapter or two of the textbook in preparation for the winter semester.

Chapter One is an introduction and overview of computing environments.

Chapter Two is on Operating-System Structures

Solutions to the practice questions and extra materials are available at http://www.os-book.com.

W02 January 06-12

Tuesday 1600h - Lecture in Morisset Hall 218

The first OS lecture covered the syllabus, followed by simple computer architecture and threading.

Labs will not be graded, but attendance is highly recommended. There will be four assignments.

I/O is short for Input/Output. Common implementations:

  1. Programmed I/O is inefficient: the CPU constantly polls the device status register for input.
  2. Interrupt-Driven I/O - CPU takes a few cycles to process interrupt when it occurs.
  3. DMA or Direct Memory Access - Bypasses CPU for most of operation. (CPU is still required to address blocks for the I/O operation.)

W03 January 13-19

Tuesday 1600h - Lecture in Morisset Hall 218

Today’s CSI lecture discussed parent and child processes, including use of the fork() system call (paired with wait() and exit()) to orchestrate a simple parent+child process.

No lab this week.

Thursday 1430h - Lecture in Morisset Hall 218

Generally, this week has been discussing processes. The following concepts are earmarked as important:

  1. Process states, including switching.
  2. The PCB (Process Control Block) - What it does, stores.
  3. Process scheduling.
  4. Operations on processes.
  5. Mechanisms for process cooperation. (IPC)

Job, task and process are used interchangeably during the lectures.

Process states:

  1. New - Process is being created & loading.
  2. Waiting - Process requires some event to occur.
  3. Ready - Process is not running, but ready to be assigned to run.
  4. Running - Process is executing instructions.
  5. Terminated - Execution has finished.

Process information is saved in a PCB (Process Control Block) which includes all of the context for a process in the waiting, ready, or running state. When program execution is put on hold, the PCB is saved so execution can resume with CPU registers in the same state at a later time:

  1. Process State
  2. Process Number
  3. Program Counter (PC)
  4. Registers
  5. Memory Limits
  6. List of Open Files

Given a list of I/O, System and User programs, a Scheduler must be utilized to determine the order in which waiting tasks are executed.

Process Scheduling Queues can be used. Job (all processes), Ready (processes in main memory) and Device (I/O) queues separate processes and run them based on queue priority. In this scenario, Long Term Schedulers are responsible for moving jobs from the Job (new state) to Ready queue (ready state), and the Short Term Scheduler manages programs in the main memory, choosing to make them run or wait. Medium Term Schedulers may also be used to swap processes from the main memory to disk (Swap Space).

/*
 *  rcf016_fork.c - Experimental program that uses UNIX sys calls.
 *  Copyright (C) 2018 Ryan Fleck under the GNU GPLv3.
 *  Compiled & tested using GCC.
 */

#include<stdio.h>
#include<unistd.h>   // Provides fork()
#include<stdlib.h>   // Provides exit()
// sys/types.h and sys/wait.h provide wait()
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char **argv)
{
    puts("I am a process, about to fork().");
    pid_t pid = fork(); // Forks _at this point_ in execution.

    if( pid == 0 ) {
        puts("I am child.");
        exit(0);
    } else {
        int status;
        wait(&status);
        printf("I am parent. Child exited with status %i.\n",
               WEXITSTATUS(status));
    }
    return 0;
}

Thursday 1430h - Tutorial in Simard Hall 425

The DGD iterated through probable quiz/midterm questions. I’ll pay more attention and take better notes next time.

What is the purpose of an OS?

W04 January 20-26

W05 January 27 - February 02

W06 February 03-09

W07 February 10-16

Feb 11: Assignment 1 due. Snow days canceled the majority of CSI class this week.

Piping

Forking

Fscking

fsck is used to check and optionally repair one or more Linux file systems. filesys can be a device name (e.g. /dev/hdc1, /dev/sdb2), a mount point (e.g. /, /usr, /home), or an ext2 label or UUID specifier (e.g. UUID=8868abf6-88c5-4a83-98b8-bfc24057f7bd or LABEL=root). Normally, the fsck program will try to handle filesystems on different physical disk drives in parallel to reduce the total amount of time needed to check all of the filesystems.2

W08 February 17-23

Reading week. Midterm preparation. Date: Feb 26. Covers:

Module-1: Operating System Overview
Module-2: Processes
Module-3: Threads
Module-4: CPU Scheduling
Module-5: Process and Thread Synchronization
 --until Semaphores (exclusive; semaphores not included)

Approach:

  1. Glance over slides and take light notes.
  2. Look over practice material.
  3. Fill knowledge gaps by reading textbook.

Notes on Module 1: Operating System Overview

Computer System Architecture

Interrupts

IO - Input & Output

Operating System


Notes on Module 2: Processes

Chapter 3 of OS Concepts Essentials, Silberschatz.

Scheduling

Operations

Inter-Process Communication


Notes on Module 3: Threads

Chapter 4 of OS Concepts Essentials, Silberschatz.

Multithreading Models

Using Threads

Threading Examples


Notes on Module 4: CPU Scheduling

Chapter 6 of OS Concepts Essentials, Silberschatz.

There will be questions about the above optimization scenarios. The following metrics will need to be provided, along with averages:

Generally:

Preemptive:

Algorithms

Advanced Topics

Examples


Notes on Module 5: Process and Thread Synchronization

Chapter 5 of OS Concepts Essentials, Silberschatz.

Peterson’s Solution

Hardware Solutions

Monitors

Semaphores

Transplanted from [Java Manual]({{ site.url }}/java#semaphores).

A Semaphore is a data structure used to address synchronization problems. It can be used in a variety of patterns. Essentially, a semaphore is a number that can be incremented or decremented, but not read, and the value of the semaphore dictates if a thread can continue operating, or must wait. The rules are well defined in the Little Book of Semaphores8 (This numbered list of rules is copied from the text.):

  1. When you create the semaphore, you can initialize its value to any integer, but after that the only operations you are allowed to perform are increment (increase by one) and decrement (decrease by one). You cannot read the current value of the semaphore.
  2. When a thread decrements the semaphore, if the result is negative, the thread blocks itself and cannot continue until another thread increments the semaphore.
  3. When a thread increments the semaphore, if there are other threads waiting, one of the waiting threads gets unblocked.

A basic implementation of a semaphore in Java appears as follows, using the built-in Object wait() and notify() methods to block and unblock threads. Since Object.wait() is final and cannot be overridden, the decrement operation is named semWait() here (the pattern sketches below still write wait() and signal() for the two operations), and a wakeups counter guards against spurious wakeups:

class Semaphore{

    private int count;
    private int wakeups = 0; // Pending signals; guards against spurious wakeups.

    public Semaphore( int count ){
        this.count = count;
    }

    synchronized public void semWait()
    throws InterruptedException{
        count--;
        if( count < 0 ){
            // Re-check on waking: proceed only on a genuine signal.
            while( wakeups == 0 ) wait();
            wakeups--;
        }
    }

    synchronized public void signal(){
        count++;
        if( count <= 0 ){
            wakeups++;
            notify();
        }
    }
}

Wait and signal are used in a number of different ways. At this point, it is best to discuss some common patterns to show how semaphores work, and when to apply them. In the subsections below, threads are labeled A, B, C... N, and semaphores are labeled sx, sy, sz..., or sa, sb, sc when created to deal with a specific thread.

Signaling

When thread A requires thread B to complete an action before it can continue, it must wait until thread B sends a signal. This ensures that A will never doStuff() before B does.

Semaphore sx = new Semaphore(0);

/*  Thread A  */
    sx.wait();
    doStuff();

/*  Thread B  */
    doStuff();
    sx.signal();

Rendezvous

When thread A and B need to wait for each other, neither can continue to execute until both reach a given point. To implement this, each thread signals its own semaphore as it arrives, then waits on the other's. The first thread to arrive blocks (its wait() drives the count below zero) until the second thread's signal() unblocks it; both can then continue to doStuff2(), though the order is not guaranteed.

// Tracks if A is ready.
Semaphore saReady = new Semaphore(0);

// Tracks if B is ready.
Semaphore sbReady = new Semaphore(0);

/*  Thread A  */
    doStuff();
    saReady.signal();
    sbReady.wait();
    doStuff2();

/*  Thread B  */
    doStuff();
    sbReady.signal();
    saReady.wait();
    doStuff2();

Mutex

Short for Mutual Exclusion, a mutex ensures only one thread can execute the code in a critical section concurrently. A very large number of threads can operate concurrently using this model, and it is guaranteed that only one will ever doCriticalStuff() at any given moment.

Semaphore sx = new Semaphore(1);

/*  Thread N  */
    sx.wait();
    doCriticalStuff();
    sx.signal();

Multiplex

The Multiplex pattern allows a set number of threads to enter a critical section concurrently. This pattern is identical to the Mutex pattern, but the Semaphore is instantiated with value n as count, where n is the thread limit.

Semaphore sx = new Semaphore(n);

/*  Thread N  */
    sx.wait();
    doCriticalStuff();
    sx.signal();

Barrier

An n-threaded generalization of the Rendezvous pattern. All threads will be blocked until the nth thread arrives, and then all can continue simultaneously. The solution incorporates a turnstile where the semaphore is rapidly decremented, then incremented, allowing each thread to pass through after the nth thread arrives. Unfortunately, this barrier pattern can only be used once as the turnstile does not reset itself.

// Mutex used to update the thread count.
Semaphore mutex = new Semaphore(1);
int count = 0;

// Barrier used to count incoming threads.
Semaphore barrier = new Semaphore(0); // Init as locked.

/*  Thread N  */
    mutex.wait();
    count++;
    // The nth arrival opens the turnstile.
    if( count == n ) barrier.signal();
    mutex.signal();

    // Turnstile: each thread waits, then immediately signals the next.
    barrier.wait();
    barrier.signal();

    doStuff();

Two-Phase Barrier

Threads wait before and after executing the critical section, in order to ensure no threads lap the others. Only one barrier is open at a time. When count reaches n, barrierB is locked and barrierA is opened, and vice versa. Locking/unlocking the barriers involves incrementing the semaphore once so it can turnstile when all the threads arrive.

// Mutex used to update the thread count.
Semaphore mutex = new Semaphore(1);
int count = 0;

// Barrier used to count incoming threads.
Semaphore barrierA = new Semaphore(0); // Init as locked.
Semaphore barrierB = new Semaphore(1); // Init as open.

/*  Thread N  */
    mutex.wait();
        count++;
        if( count == n ){
            barrierB.wait();
            barrierA.signal();
        }
    mutex.signal();

    barrierA.wait();
    barrierA.signal();

    doStuff(); // Critical point.

    mutex.wait();
        count--;
        if( count == 0 ){
            barrierA.wait();
            barrierB.signal();
        }
    mutex.signal();

    barrierB.wait();
    barrierB.signal();

Queue

A queue ensures threads of different types proceed in pairs.

Semaphore typeX = new Semaphore(0);
Semaphore typeY = new Semaphore(0);

/*  Thread A of type X  */
    typeY.signal();
    typeX.wait();
    doStuff();

/*  Thread B of type Y  */
    typeX.signal();
    typeY.wait();
    doStuff();

W15 April 07-12

Final preparation. Date: Feb 10. Most questions from chapters 5-12.

Major topics:

Minor topics:

Notes on Modules 5 and 6: Synchronization and Deadlocks

The Critical Section Problem

How can we programmers guarantee that a thread or process has exclusive access to a set of resources in a ‘critical section’ of the code? A good solution must meet a few requirements. Mutual Exclusion must be attained, where only a single thread executes the critical section at a time. The CS must be accessible to all threads that need to enter it, and deadlock should be avoided. Sharing is caring, so Bounded Waiting must be implemented to prevent starvation.

The Critical Section Problem can be solved with software, hardware or higher level software abstractions like semaphores and monitors.

Peterson’s Algorithm

Ensures two processes cannot enter the critical section simultaneously. A valid solution to the Critical Section Problem.

/* Shared between the two tasks i and j: */
int turn;                        /* Whose turn it is to enter. */
bool flag[2] = { false, false }; /* flag[i]: task i wants to enter. */

/* Task i: */
while(true) {
    flag[i]=true;
    turn=j; /* Defer to the other task. */
    while (flag[j]==true && turn==j){/* Busy-Waiting */} ;
    /* Critical Section */
    flag[i]=false; // I no longer want in
    /* Remainder Section */
}
Banker’s Algorithm

Semaphores vs Monitors

Monitors can be implemented using semaphores; a monitor ensures mutual exclusion and also contains procedures and local data.

Deadlock Conditions

Handling Deadlocks

avoidance, prevention, allowing deadlocks then detection + recovery, ignoring the problem and pretending that deadlocks never occur in the system, recovery strategies.

Notes on Module 7: Memory

Physical and Logical Address Spaces

Internal and External Fragmentation

Contiguous Allocation

fixed partitioning, dynamic partitioning, placement algorithms

Paging

address translation, page tables, how are they used and implemented, TLB, Effective Access Time (hierarchical, hashed, inverted page table)

Segmentation

segment table, & in combination with paging

Notes on Module 8: Virtual Memory

Paging

Page Faults

Page Fault Replacement Algorithms

Optimal
FIFO

First in first out, a circular buffer is used to keep a pointer to the next frame to be replaced.

LRU

Least recently used, implements a linked list to test which frame was used the least recently.

CLOCK

Implements a FIFO circular buffer, but entries have a ‘use’ bit. When searching for a frame to replace, the pointer skips (and clears the use bit of) any entry whose use bit is set; a frame whose use bit is already clear is replaced.

Page Buffering and Cleaning Policies

Allocation Issues

Thrashing, what and how to deal with.

Resident set Management

Page Fault Rate Strategy

Load Control

Notes on Module 9: File Systems

Notes on Module 10: Mass Storage

ToDo

Notes on Module 11: Input & Output (I/O)

ToDo


Side Effects

Interesting experiments that occur as a result of knowledge gained in the classroom will be documented here, if any are completed.



  1. Footnotes are used by placing [^ref] where a superscript number should be placed, and [^ref]: explanation can be used in-place or at the end of the document. All referenced footnotes will be collected and placed at the end of the document. More info. ↩︎

  2. linux.die.net, fsck - check and repair a Linux file system ↩︎

  3. CSI3131 Slides - Module 1 - Computer Systems Overview ↩︎

  4. Memory: Stack vs Heap, GribbleLab ↩︎

  5. Copy on Write ↩︎

  6. Abbreviation: vv => vice versa. ↩︎

  7. Thrashing - When the CPU is trying to process too many programs simultaneously, and constant context switching and swapping prevents any real work from being completed. ↩︎

  8. The Little Book of Semaphores, Allen B. Downey. ↩︎




Page Information

Title: CSI3131: Operating Systems
Permalink:
https://manuals.ryanfleck.ca/csi3131/