Filling in for final cram

Caching

Transclude of ECS154A-L22-2025-03-19-14.39.05.excalidraw

Use part of the address to steer the search, so the cache doesn't have to track as much per entry.

Direct Mapped

Uses the address to determine where to look for something. The data will either be in that one spot or not in the cache at all; the tag bits are compared to decide whether the entry there actually matches.
The spot is determined by the index bits.
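
A minimal sketch (my own, not from lecture) of how a direct-mapped lookup uses those bits, assuming a 4-line cache with 8-byte lines; the structure and names are made up for illustration:

```python
# Direct-mapped lookup sketch: one possible spot per address, chosen by the
# index bits; the stored tag decides hit vs. miss.
LINE_SIZE = 8                              # bytes per line -> 3 offset bits
NUM_LINES = 4                              # lines in the cache -> 2 index bits
OFFSET_BITS = LINE_SIZE.bit_length() - 1   # log2(8) = 3
INDEX_BITS = NUM_LINES.bit_length() - 1    # log2(4) = 2

# each entry holds just a valid bit and a tag (data omitted for brevity)
cache = [{"valid": False, "tag": None} for _ in range(NUM_LINES)]

def dm_lookup(addr: int) -> bool:
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    entry = cache[index]                   # the one spot this address can be in
    return entry["valid"] and entry["tag"] == tag
```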

Set Associative

Uses the address to determine which set a block goes in.
Within a set it is fully associative: it purely looks for a matching tag among the ways.
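
And a matching sketch (again my own, with made-up names) of a set-associative lookup, assuming 4 sets of 2 ways with 8-byte lines:

```python
# Set-associative lookup sketch: the set index picks one set, then every way
# in that set is tag-compared (fully associative within the set).
LINE_SIZE = 8
NUM_SETS = 4
WAYS = 2
OFFSET_BITS = LINE_SIZE.bit_length() - 1   # 3
SET_BITS = NUM_SETS.bit_length() - 1       # 2

cache = [[{"valid": False, "tag": None} for _ in range(WAYS)]
         for _ in range(NUM_SETS)]

def sa_lookup(addr: int) -> bool:
    set_idx = (addr >> OFFSET_BITS) & ((1 << SET_BITS) - 1)
    tag = addr >> (OFFSET_BITS + SET_BITS)
    return any(way["valid"] and way["tag"] == tag for way in cache[set_idx])
```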

Functional Differences

Direct mapped caches are used closer to the CPU because they're faster.

  • These are used more for virtual memory?
    Hybrid (set associative) ones sit a little farther away because they're slower.

Transclude of ECS154A-L22-2025-03-19-15.07.04.excalidraw

12-entry FA vs 12-entry 3-way set associative SA

  • Both have 12 entries, but:
    • FA has to do 12 parallel comparisons
    • SA only has to do 3
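
As a quick arithmetic check (my own sketch, not from the notes):

```python
# 12-entry fully associative vs. 12-entry 3-way set associative
entries, ways = 12, 3
fa_comparisons = entries     # FA: all 12 tags compared in parallel
sets = entries // ways       # SA: 12 entries / 3 ways = 4 sets
sa_comparisons = ways        # SA: only the 3 ways in the indexed set
```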

Direct Mapped Cache: 4 entry, linesize=8, (8 bit address)

  • Will have 4 rows in the table
  • Will have 8 bytes of data in each row (line size = 8)
  • For an 8 bit address:
    • 0x10 = TTT|EE|OOO
    • which breaks down into:
      • 3 offset bits, because there's 8 bytes of data in each row
      • 2 entry (index) bits, because there's 4 entries
      • the rest are tag bits (3 left over)
  • 32 bytes of data stored in total
    • 8 bytes of data per line
    • 4 lines
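
A quick check of that split on the example address 0x10 (my own sketch, not from the lecture):

```python
# Split 0x10 for a direct-mapped cache with 4 entries (2 index bits)
# and 8-byte lines (3 offset bits).
addr = 0x10                  # 0b0001_0000

offset = addr & 0b111        # low 3 bits        -> 0
entry = (addr >> 3) & 0b11   # next 2 bits       -> 2
tag = addr >> 5              # remaining 3 bits  -> 0

print(f"tag={tag:03b} entry={entry:02b} offset={offset:03b}")
# tag=000 entry=10 offset=000
```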

Fully Associative: 17 entries, linesize=2, (8 bit address)

  • 8 bit address: TTTT TTT|O
    • 1 offset bit (line size of 2)
    • the rest are tag bits (fully associative has no index bits, so the 17 entries don't change the split)

DM: 8 entries, ls=4

  • TTTT TEOO
  • If there's 4 bytes in each line, how many lines can there be in an 8-byte cache?
    • 8/4 = 2 lines
    • Need 1 bit to index the 2 lines

2-way Set Associative Cache: LS=8 byte, 128 entries:

  • TTEE EOOO
  • 3 offset bits (log2 of the line size)
  • 2 ways per set, 8 bytes in each line => 16 bytes per set!
  • 128 bytes / 16 bytes per set => 8 sets
  • 8 sets need 3 set index bits
  • the remaining 2 bits are the tag
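
To tie the examples together, a small helper (my own sketch; the split() name is made up) that reproduces each breakdown above from the address width, line size, and number of lines or sets (pass 1 for fully associative, since it has no index bits):

```python
from math import log2

def split(addr_bits: int, line_size: int, lines_or_sets: int) -> tuple[int, int, int]:
    """Return (tag_bits, index_bits, offset_bits)."""
    offset_bits = int(log2(line_size))
    index_bits = int(log2(lines_or_sets))
    return addr_bits - index_bits - offset_bits, index_bits, offset_bits

print(split(8, 8, 4))   # DM, 4 lines, ls=8        -> (3, 2, 3): TTT|EE|OOO
print(split(8, 2, 1))   # FA, ls=2, no index bits  -> (7, 0, 1): TTTTTTT|O
print(split(8, 4, 2))   # DM, 2 lines, ls=4        -> (5, 1, 2): TTTTT|E|OO
print(split(8, 8, 8))   # 2-way SA, 8 sets, ls=8   -> (2, 3, 3): TT|EEE|OOO
```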