Chapter 3 (Hardware)

abstraction
A method used to hide unneeded or complex details of a problem or idea. Found in lecture Chapter 3 - Hardware 1
von Neumann Architecture
Computer architecture with four subsystems: 1) Memory, 2) Input/Output devices (I/O), 3) ALU, 4) Control. Two characteristics: 1) Instructions are stored like data; 2) Instructions are executed sequentially. Found in lecture Chapter 3 - Hardware 2
RAM
Random Access Memory. Memory is divided into cells. Each cell (aka the minimum unit of access) is associated with an address. A standard cell width is 8 bits (1 byte). Must always access the ENTIRE cell, even if you just want 1 bit. Found in lecture Chapter 3 - Hardware 1 and zyBook Section 3.2
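Because the cell is the minimum unit of access, a program that needs a single bit still reads the whole byte and masks out the bit it wants. A minimal Python sketch (the cell value and bit position are made up for illustration):

```python
# Read a whole 8-bit cell, then isolate one bit with shift-and-mask.
cell = 0b10110010      # hypothetical contents of one memory cell (1 byte)
bit_index = 4          # the single bit we actually want (0 = least significant)

bit = (cell >> bit_index) & 1
print(bit)             # -> 1
```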
ROM
Read Only Memory. ROM holds important system instructions and data, such as the START UP instructions for a computer (aka firmware). The information is usually prerecorded during manufacturing. Found in lecture Chapter 3 - Hardware 1
volatile memory
Memory whose information disappears without power, e.g., RAM and cache. Found in lecture Chapter 3 - Hardware 1 and zyBook Section 3.3
non-volatile memory
Memory whose information is preserved without power, e.g., ROM, disc, flash. Found in lecture Chapter 3 - Hardware 1 and zyBook Section 3.3
cache
On-chip memory that stores DATA from a slower device so it can be accessed faster; used to decrease memory access times. Works because of locality. - SMALL (on the order of a few megabytes) because it's expensive. - FAST (5-10x faster than RAM). Found in lecture Chapter 3 - Hardware 2 and zyBook Section 3.3
cache hit
When requested data is found in the cache. Found in lecture Chapter 3 - Hardware 2
cache hit rate/ratio
Probability that a data request will be a cache hit; in other words, the % of time that the needed info is in cache memory. INCREASE the cache hit ratio -> DECREASE overall memory access time. - Good to know: the cache hit ratio CHANGES as the system executes: - it decreases as we move from one locality to another; - it increases as we stay in the same locality (if the cache is big enough to hold the locality). Found in lecture Chapter 3 - Hardware 2
cache miss
When requested data is not found in the cache. Found in lecture Chapter 3 - Hardware 2
What can we do to increase cache hit ratio?
- Increase the size of the cache (it can store more; even a small cache can yield a ~75% hit ratio). - Be smarter about what to put in the cache (studies show 98% of process time is spent in localities, so if the OS can be smarter about what goes in the cache, the hit ratio increases). Found in lecture Chapter 3 - Hardware 2
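To see why a higher hit ratio lowers overall memory access time, a common back-of-the-envelope model is: effective access time = hit_ratio * cache_time + (1 - hit_ratio) * RAM_time. A short Python sketch with made-up latencies (the 2 ns and 15 ns figures are illustrative, not from the lecture):

```python
def effective_access_time(hit_ratio, cache_ns=2.0, ram_ns=15.0):
    """Average memory access time for a given cache hit ratio.

    cache_ns and ram_ns are illustrative latencies, not lecture values.
    A miss is modeled as paying the full RAM access time.
    """
    return hit_ratio * cache_ns + (1 - hit_ratio) * ram_ns

for ratio in (0.75, 0.90, 0.98):
    print(f"hit ratio {ratio:.0%}: {effective_access_time(ratio):.2f} ns on average")
```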
memory cell
Fixed-size unit of memory. Standard size = 8 bits = 1 byte.
address
Unique identifier for a memory cell. Address space = MAX # of possible addresses. Example question: Suppose RAM addresses are N bits. How many memory cells can theoretically exist? 2^N. If the ADDRESS is 16 bits, how many cells could exist in memory? 65,536 (64K). Found in lecture Chapter 3 - Hardware 1
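A quick check of the 2^N rule, where N is the address width in bits:

```python
def address_space_size(address_bits):
    """Number of memory cells that can theoretically be addressed with N-bit addresses."""
    return 2 ** address_bits

print(address_space_size(16))   # -> 65536 (64K), matching the example above
print(address_space_size(32))   # -> 4294967296 possible addresses
```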
register
Stores VALUES that are BEING USED during processing. A special high-speed storage cell that holds something specific. Note: registers are not designed for general purposes like other types of memory (e.g., cache, RAM). Found in lecture Chapter 3 - Hardware 3
ALU
Arithmetic Logic Unit. Performs math and logical operations (e.g., addition, subtraction, comparisons, AND, OR, NOT). Found in lecture Chapter 3 - Hardware 2 and zyBook Section 3.2
control unit
Component of the processor that FETCHES, DECODES, and EXECUTES instructions. Found in lecture Chapter 3 - Hardware 2 and zyBook Section 3.2
processor
ALU + control unit. Found in lecture Chapter 3 - Hardware 2 and zyBook Section 3.2
interrupt
An input signal to the processor indicating an event that needs immediate attention. Alerts the processor and serves as a request for the processor to interrupt the currently executing code, so that the event can be processed in a timely manner. Found in lecture Chapter 3 - Hardware 3
Moore's Law
Since the invention of the IC (integrated circuit) around 1960, ICs have doubled in circuit capacity about every 1.5 to 2 years. Found in lecture Chapter 3 - Hardware 2 and zyBook Section 3.7
truth table
One way to describe the behavior of a combinational circuit, listing the output value for every possible combination of input values. Found in lecture Chapter 3 - Circuits 1 and zyBook Section 3.1
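A truth table can be generated mechanically by evaluating the circuit's output for every combination of input values. A minimal Python sketch for a three-input example; the expression (A AND B) OR C is made up for illustration:

```python
from itertools import product

def output(a, b, c):
    """Example combinational behavior: (A AND B) OR C. Illustrative only."""
    return (a and b) or c

# One row per possible combination of the three binary inputs.
print("A B C | out")
for a, b, c in product([False, True], repeat=3):
    print(int(a), int(b), int(c), "|", int(output(a, b, c)))
```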
transistor
a device with no mechanical/moving parts that can be OFF or ON. Found in lecture Chapter 3 - Circuits 2 and zyBook Section 3.7
tautology
A Boolean expression that is always true. Example: (A OR B) OR (NOT A). Found in lecture Chapter 3 - Circuits 1
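One way to verify a tautology is to enumerate every row of its truth table and confirm the output is always true, as in this short sketch for the example above:

```python
from itertools import product

# (A OR B) OR (NOT A) should be true for every possible combination of inputs.
is_tautology = all((a or b) or (not a) for a, b in product([False, True], repeat=2))
print(is_tautology)   # -> True
```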
circuit
a collection of logic gates that transforms a set of binary inputs into a set of binary outputs. Found in lecture Chapter 3 - Circuits 1 and zyBook Section 3.1
gate
an electronic device with 1+ transistors that operates on a set of binary inputs to produce a binary output. Found in lecture Chapter 3 - Circuits 1 and zyBook Section 3.1
boolean expression
an expression that evaluates to either true or false. Found in lecture Chapter 3 - Circuits 1
boolean logic
An area of mathematics dealing with rules for manipulating the values true and false. Found in lecture Chapter 3 - Circuits 1
computer
a device that: 1) Takes input. 2) Processes the input in some way. 3) Produces output (a result). Found in lecture Chapter 3 - Hardware 1
von Neumann Bottleneck
- As CPU speeds have increased, CPU idle time has increased (due to waiting for data to be fetched from memory). - Significant changes to computer architecture are needed to handle larger problems. Found in lecture Chapter 3 - Hardware 2
Two key issues with von Neumann architectures and how we might solve them
TWO ISSUES: 1. Inability to place circuits closer together on a chip. Moore's Law is slowing down! Processor speeds are leveling off while the problems to be solved keep getting larger! -> Solution: parallel processing (multi-processor). "If you can't build something to work twice as fast, build it to do two things at once. The results will be identical." 2. Slow speed of memory access. -> Solution: adding a cache helps, adding multi-threading helps, and adding new types of RAM (e.g., DDR SDRAM) helps, but NONE solve the full problem. No matter how fast a given CPU can work, it is limited by the rate of memory transfer. Found in lecture Chapter 3 - Hardware 2
Memory vs. Disc
Discs are SLOW (an order of magnitude slower than memory). Solid state discs (SSDs) are now more common: - no moving parts -> faster; - more expensive $$$. Why not use an SSD as RAM? - SSDs are starting to get as fast as RAM, but SSDs wear out (they would wear out fast if accessed as OFTEN as we access RAM). - SSDs are optimized to act as a disc; RAM is optimized to act as memory. Found in lecture Chapter 3 - Hardware 2
bootloader
Transfers the OS from a pre-determined location into RAM, then jumps to the OS to run the system. Found in lecture Chapter 3 - Hardware 2
The 4 basic instruction types
Data transfer: moves data to/from memory and I/O devices. Arithmetic: calculates numerical operations (e.g., add). Comparison: compares 2 values (e.g., ==, <, >, !=) and sets a flag appropriately. Branch/Jump: controls the execution flow of the program (e.g., if, loops) by changing the value in the program counter (PC) register. Found in lecture Chapter 3 - Hardware 3
instruction set
Set of all operations that can be executed by a processor. Set of all implemented opcodes. Found in lecture Chapter 3 - Hardware 3
address fields
Second part of a machine language instruction. Specifies which memory addresses (up to 3) are being operated on. Found in lecture Chapter 3 - Hardware 3
Operation code (opcode)
First part of a machine language instruction. Specifies which operation needs to be carried out (e.g., addition, store). - If the OPCODE is 2 bits, how many different instructions are possible? Four (01 Input, 11 Add, 10 Output, 00 Stop). - If the OPCODE is 8 bits? 2^N = 2^8 = 256 different instructions are possible. Found in lecture Chapter 3 - Hardware 3
machine language
Binary representation of program instructions. Each machine language instruction has 2 types of fields: 1. an opcode (specifies which operation needs to be carried out). 2. address fields (specifies what memory addresses are being operated on). Found in lecture Chapter 3 - Hardware 3
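As a concrete illustration of splitting an instruction into its two fields, the sketch below decodes a 16-bit word with a 4-bit opcode and a 12-bit address field. The field widths and opcode names are hypothetical, not the lecture's actual instruction format:

```python
# Hypothetical 16-bit instruction format: 4-bit opcode + 12-bit address field.
OPCODES = {0b0001: "LOAD", 0b0010: "STORE", 0b0011: "ADD", 0b0000: "HALT"}

def decode(instruction):
    """Split a 16-bit word into its opcode and address field."""
    opcode = (instruction >> 12) & 0xF     # top 4 bits
    address = instruction & 0xFFF          # bottom 12 bits
    return OPCODES.get(opcode, "UNKNOWN"), address

print(decode(0b0011_0000_0010_1010))       # -> ('ADD', 42)
```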
execute
Control unit action: executes the instruction by issuing the appropriate command. Found in lecture Chapter 3 - Hardware 3
decode
Control unit action: use OPCODE to determine what to do for an instruction. Found in lecture Chapter 3 - Hardware 3
fetch
Control unit action: fetches the next instruction from memory at the address stored in the PC (program counter). A fetch gets the value at an address (nondestructive fetch): - the value remains in memory; - load the MAR, decode the MAR (fetch the address; place the value in the MDR); - return the value in the MDR. Found in lecture Chapter 3 - Hardware 3
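To tie the fetch, decode, and execute cards together, here is a tiny von Neumann-style simulator: instructions and data share one memory, and instructions execute sequentially via the PC. It reuses the hypothetical 4-bit-opcode format sketched earlier; nothing here is the lecture's actual machine.

```python
# Toy fetch-decode-execute loop. Memory holds both instructions and data
# (the von Neumann idea); the opcodes and format are hypothetical.
MEMORY = [0] * 4096
MEMORY[0] = 0b0001_0000_0110_0100   # LOAD  address 100
MEMORY[1] = 0b0011_0000_0110_0101   # ADD   address 101
MEMORY[2] = 0b0010_0000_0110_0110   # STORE address 102
MEMORY[3] = 0b0000_0000_0000_0000   # HALT
MEMORY[100], MEMORY[101] = 7, 35    # data

pc, acc = 0, 0                      # program counter and accumulator register
while True:
    mar = pc                        # fetch: MAR <- PC
    mdr = MEMORY[mar]               # MDR <- memory[MAR] (nondestructive)
    pc += 1                         # instructions execute sequentially
    opcode, addr = mdr >> 12, mdr & 0xFFF   # decode
    if opcode == 0b0001:            # execute: LOAD value into accumulator
        acc = MEMORY[addr]
    elif opcode == 0b0011:          # ADD value to accumulator
        acc += MEMORY[addr]
    elif opcode == 0b0010:          # STORE accumulator to memory
        MEMORY[addr] = acc
    else:                           # HALT
        break

print(MEMORY[102])                  # -> 42
```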
MIPS
millions of instructions per second. Found in lecture Chapter 3 - Hardware 3
GIPS
billions of instructions per second. Found in lecture Chapter 3 - Hardware 3
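Both rates are just an instruction count divided by elapsed time; a quick sanity-check calculation (the instruction count and runtime below are made up):

```python
instructions_executed = 3_000_000_000   # hypothetical count
elapsed_seconds = 1.5                   # hypothetical runtime

rate = instructions_executed / elapsed_seconds
print(f"{rate / 1e6:.0f} MIPS")         # -> 2000 MIPS
print(f"{rate / 1e9:.1f} GIPS")         # -> 2.0 GIPS
```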