OS Midterm


116 Terms

1

data register

small fast data storage location on the CPU (aka buffer register)

2

address register

specifies the address in memory for the next read or write

3

PC (program counter)

holds address of next instruction to be fetched

4

instruction register

stores the fetched instruction

5

interrupt

allows other modules to interrupt the normal sequencing of the processor

6

hit ratio

fraction of all memory accesses found in the cache

7

Principle of locality

memory references by the processor tend to cluster

8

temporal locality

limited range of memory addresses requested repeatedly over a period of time

9

spatial locality

memory addresses that are requested sequentially

10

cache

small, fast storage close to the CPU, used for frequently accessed data or instructions; typically organized in three levels (L1, L2, L3)

11

memory hierarchy

system of memory levels balancing cost and capacity vs speed. Bigger = slower = cheaper

12

volatile memory

memory that will be cleared when computer is powered off (ex: RAM)

13

purpose of interrupts

helpful for handling asynchronous events, multitasking, and error handling

14

interrupt classes

program (illegal instruction)

timer

I/O

hardware failure

15

interrupt handler

determines nature of interrupt and performs necessary actions

16

program flow with and without interrupts

program is able to execute separate instructions when waiting on something (like I/O)

17

multiple interrupt handling

Approach 1: Disable interrupts while processing an interrupt

Approach 2: Use a priority scheme

18

calculation of EAT (Effective Access Time)

Ts = H*T1 + (1-H)*(T1 + T2)

where

Ts = average access time

H = hit ratio

T1 = access time of M1 (cache)

T2 = access time of M2 (main memory)
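The formula can be checked with a short Python sketch; the 95% hit ratio and the 1 ns / 100 ns access times below are made-up example values, not from the course material:

```python
def effective_access_time(hit_ratio, t_cache, t_memory):
    """Average access time of a two-level memory (cache + main memory).

    On a hit we pay only the cache access time T1; on a miss we pay the
    cache lookup plus the main-memory access (T1 + T2).
    """
    return hit_ratio * t_cache + (1 - hit_ratio) * (t_cache + t_memory)

# Example: 95% hit ratio, 1 ns cache, 100 ns main memory
print(effective_access_time(0.95, 1, 100))  # ~6.0 ns
```

Note how a small miss rate dominates the average: 5% of accesses paying 101 ns contributes 5.05 ns of the 6 ns total.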

19

Instruction execution order

fetch instruction, then execute

20

Operating System

interface between applications and hardware that controls the execution of programs

21

basic elements of a computer

  • processor

  • I/O modules

  • Main memory

  • System Bus

22

system bus

provides communication between computer components

23

I/O modules

move data between computer and external environment

  • secondary memory

  • communication equipment

  • terminal

24

programmed I/O

I/O module performs action and sets appropriate bits in I/O status register. processor periodically checks status of I/O module

25

Interrupt-Driven I/O

I/O module interrupts processor when ready to exchange data

26

Direct Memory Access (DMA)

performed by separate module on system bus or incorporated into I/O module

27

symmetric multiprocessors (SMP)

stand-alone computer system where

  • 2+ processors

  • processors share memory, access to I/O

  • system controlled by one OS

  • high performance/scaling/availability

28

kernel

contains the most frequently used portions of the OS. The central component of the OS; manages resources, processes, and memory

29

turnaround time

total time to execute a process

30

process switch

switching between processes, requires switching data within registers (aka context switch)

31

process

Instance of a program in execution; unit of activity that can be executed on a processor

32

3 components of a process

  • executable program

  • associated data needed by program

  • execution context

33

execution context

  • internal data OS can supervise/control

  • contents of registers

  • process state, priority, I/O wait status

34

5 OS management responsibilities

  • process isolation

  • automatic allocation + management

  • modular programming support

  • protection and access control

  • long-term storage

35

Application Binary Interface (ABI)

defines how a compiled application interfaces with the system: specifies the system call interface and the user-level portion of the instruction set architecture (ISA)

36

Instruction Set Architecture (ISA)

the set of instructions the CPU can execute; considered an interface between hardware and software

37

thread

a lightweight process that shares resources within a process. Dispatchable unit of work; includes a thread context

38

multithreaded process

process that contains multiple concurrent threads of execution

39

multiprogramming

the ability to store processes in memory and switch execution between programs

40

degree of multiprogramming

number of concurrent processes allowed in main memory

41

goals of an OS

convenience

efficiency

evolution ability

manage computer resources

42

multitasking vs parallelism

multitasking executes multiple processes on one CPU by allocating each process CPU time. Parallel processing involves using multiple cores.

43

activities associated with processes

creation, execution, scheduling, resource management

44

virtual memory

space allocated to a program using relative (logical) addresses, which are translated to physical addresses at run time

45

paging

system of fixed size blocks assigned to processes

46

microkernel architecture

assigns few essential functions to kernel

  • simple implementation

  • flexible

  • good for distributed environment

smaller than monolithic kernels

47

monolithic kernel

kernel where all components are in 1 address space. large and hard to design, but high performing

48

signal

mechanism to send message kernel→process

49

system call

mechanism to send message process→kernel

50

distributed operating system

provide illusion of

  • single main and secondary memory space

  • unified access facilities

51

object oriented OS

  • add modular extensions to small kernel

  • easy OS customizability

  • eases development of tools

52

5 process states

  • new

  • ready

  • running

  • blocked

  • exiting

53

blocked vs suspended

Blocked

  • waiting on event

  • can run once event happens

Suspended

  • able to run

  • instructed not to run

54

swapping

moving pages from memory to disk

  • happens when OS runs out of physical memory

55

dispatcher

small program that switches processor between processes

56

ready queue

queue that stores processes ready to run (waiting for CPU time)

57

event queue

queue that manages and processes asynchronous events (ex: timers, I/O)

58

virtual machine

dedicates one or more cores to a particular process and otherwise leaves the processor alone

59

preemption

suspending a running process to allow another process to run

60

process switch

7-step sequence to switch between processes

  • save processor context

  • update PCB

  • move PCB to appropriate queue

  • select new process

  • update PCB

  • update memory data structures

  • restore processor context

61

process image

process’s state at a given moment

  • user-level context

  • register context

  • system level context

62

process control block (PCB)

data needed by OS to control process

  • identifiers

  • user-visible registers

  • control and status register

  • scheduling

  • privileges

  • resources

  • memory management

63

role of PCB

  • contain info about process

  • read/modified by every module in OS

  • defines state of OS

hard to protect

64

User Running (process state)

Executing in user mode

65

Kernel Running (process state)

Executing in kernel mode

66

ready to run, in memory (process state)

ready to run as soon as the kernel schedules it

67

asleep in memory (process state)

unable to run until event occurs; process in main memory (blocked state)

68

ready to run, swapped (process state)

ready to run, but must be swapped into main memory

69

sleeping, swapped (process state)

process awaiting an event, swapped out to secondary storage (blocked state)

70

preempted (process state)

able to run, but instructed not to. While the process is returning from kernel mode to user mode, the kernel preempts it and performs a process switch to another process

71

created/new (process state)

newly created process; not yet ready to run. The parent has requested creation of the child, but the child has not yet been allocated space or loaded into main memory

72

zombie (process state)

process no longer exists, but leaves a record for the parent process to collect

73

I/O bound processes

processes that spend a significant amount of time waiting for I/O responses

74

CPU bound processes

processes that spend almost all of their time executing on the CPU

75

User vs Kernel mode implementation

user mode requests services from OS through system calls and interrupts

76

User vs Kernel mode reasoning

  • protection

  • security

  • isolation

  • flexibility

77

When Kernel mode is used

applications act in user mode, until they need special access through system calls and interrupts

78

process creation steps

  • assign PID

  • allocate space

  • initialize PCB

  • set linkages

  • create/expand other data structures

79

Trap

error generated by current process

known as exception/fault

80

when process switches occur

  • timeout

  • I/O

  • system calls

  • interrupts

81

User level thread

  • thread management done by application

  • kernel not aware of threads

82

Kernel level thread

thread management done by kernel

83

benefits of threads

threads share memory within a process and are quicker and more efficient to create and switch between than processes

84

5 components of a thread

  • execution state

  • thread context

  • execution stack

  • storage

  • memory/resource access

85

thread execution states

  • ready

  • running

  • blocked

86

thread operations

  • spawn

  • block

  • unblock

  • finish

87

ULT pros and cons

pros:

  • doesn’t require kernel mode

  • works on any OS

cons:

  • system calls block all threads of a process

  • cannot multiprocess

88

KLT pros and cons

pros:

  • can run multiple threads in parallel

  • can schedule new thread if thread is blocked

cons:

  • needs kernel mode

  • OS specific

89

ULT vs KLT applications

ULT: web servers, games, user level applications

KLT: network services, device drivers, background applications

90

user vs kernel mode

User: most applications run here, restricted access, safer

Kernel: unrestricted access, dangerous

91

Amdahl’s law

the idea that speedup has diminishing returns and does not scale linearly. Allows us to determine optimal number of processors
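The diminishing returns are easy to see numerically. A small Python sketch of Amdahl's law, Speedup = 1 / ((1 - f) + f/N), where f is the parallelizable fraction of the work and N the number of processors (the f = 0.9 figures below are illustrative values, not from the course):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: overall speedup when a fraction f of the work
    can be parallelized across N processors."""
    f = parallel_fraction
    return 1.0 / ((1 - f) + f / n_processors)

# Diminishing returns with f = 0.9: a 10x increase in processors
# (10 -> 100) yields well under a 2x improvement in speedup.
print(amdahl_speedup(0.9, 10))   # ~5.26
print(amdahl_speedup(0.9, 100))  # ~9.17
# Even with infinitely many processors the limit is 1 / (1 - f) = 10.
```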

92

Linux tasks

  • single-threaded process

  • thread

  • kernel tasks

93

Linux namespaces

separate views that process can have of the system

  • helps create illusion that processes are the only process on a system

94

monitor

a synchronization construct easier to use than a raw semaphore, implemented at the programming-language (PL) level

95

synchronization

enforce mutual exclusion

achieved by condition variables

  • binary variables that flag suspension or resumption of a process
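A minimal sketch of suspension and resumption via a condition variable, using Python's `threading.Condition` (the `ready` flag and thread names here are illustrative):

```python
import threading

cond = threading.Condition()
ready = False
result = []

def waiter():
    with cond:
        while not ready:      # re-check the flag to guard against spurious wakeups
            cond.wait()       # releases the lock and suspends this thread
        result.append("resumed")

def signaler():
    global ready
    with cond:
        ready = True
        cond.notify()         # resumes one waiting thread

t1 = threading.Thread(target=waiter)
t2 = threading.Thread(target=signaler)
t1.start(); t2.start()
t1.join(); t2.join()
print(result)  # ['resumed']
```

The `while not ready` loop is the standard idiom: the condition is always re-checked after waking, so the program is correct regardless of which thread runs first.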

96

message passing

needs synchronization and communication

has send and receive

both sender and receiver can be blocked
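The send/receive pair can be sketched with a blocking queue as the mailbox; `mailbox`, `sender`, and `receiver` are illustrative names, not part of any standard message-passing API:

```python
import queue
import threading

mailbox = queue.Queue()   # the shared mailbox
received = []

def sender():
    mailbox.put("hello")          # send: deposits a message

def receiver():
    received.append(mailbox.get())  # receive: blocks until a message arrives

t = threading.Thread(target=receiver)
t.start()        # receiver blocks on the empty mailbox
sender()         # sender deposits a message, unblocking the receiver
t.join()
print(received)  # ['hello']
```

Here the receiver is blocked until a message arrives; a bounded `Queue(maxsize=n)` would also block the sender when the mailbox is full.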

97

addressing

schemes for specifying processes in send and receive

direct and indirect

98

readers/writers problem

data area shared among many processes

3 conditions

  • any number of readers

  • 1 writer

  • no reading when writer writing
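The three conditions can be sketched as a simple reader–writer lock, in the style of the classic "first readers–writers" solution (readers can starve writers in this variant); the class and method names are illustrative:

```python
import threading

class RWLock:
    """Many readers OR one writer; no reading while a writer writes."""

    def __init__(self):
        self._readers = 0
        self._mutex = threading.Lock()       # protects the reader count
        self._write_lock = threading.Lock()  # held by the writer, or on behalf of all readers

    def acquire_read(self):
        with self._mutex:
            self._readers += 1
            if self._readers == 1:           # first reader locks out writers
                self._write_lock.acquire()

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:           # last reader lets writers in
                self._write_lock.release()

    def acquire_write(self):
        self._write_lock.acquire()           # excludes readers and other writers

    def release_write(self):
        self._write_lock.release()
```

Any number of readers may hold the lock at once, but a writer must wait until the last reader leaves, and readers must wait while a writer holds `_write_lock`.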

99

race condition

when multiple threads/processes read and write data items; final result depends on order of execution
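A minimal sketch of the fix: without the lock below, concurrent read-modify-write on `counter` could lose updates and the final value would depend on thread scheduling; holding the lock makes each increment atomic (the counts and thread count are arbitrary example values):

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:        # critical section: read, increment, write back
            counter += 1

threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 -- deterministic only because of the lock
```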

100

mutual exclusion

requirement that no other processes can be in a critical section when 1 process is accessing critical resources

