📗 -> 11/5: Memory


Lecture Slide Link

🎤 Vocab

โ— Unit and Larger Context

Small summary

✒️ -> Scratch Notes

iClicker

What leads a neuron to fire, signaling that it has detected relevant input?
Neurons receive a mix of excitatory and inhibitory inputs; they fire when the excitatory inputs win the competition and push the neuron over its firing threshold.
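
A minimal sketch of that idea (toy numbers and a made-up function of my own, not from the slides): add up the excitatory and inhibitory inputs and fire only if the net input clears the threshold.

  # Toy threshold unit: excitation and inhibition compete, and the unit
  # fires only if the net input gets over threshold. Illustrative values.
  def fires(excitatory, inhibitory, threshold=0.5):
      net_input = sum(excitatory) - sum(inhibitory)
      return net_input > threshold

  print(fires(excitatory=[0.4, 0.3], inhibitory=[0.1]))        # True: net ~0.6 > 0.5
  print(fires(excitatory=[0.4, 0.3], inhibitory=[0.4, 0.2]))   # False: net ~0.1 < 0.5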

Which of the following is NOT true of neural networks?
Attractor dynamics prevent networks from settling into stable states (this is the false statement: attractor dynamics are exactly what pull a network into a stable state; think of the Necker cube sim, sketched below).
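
A rough sketch of attractor dynamics in that spirit (my own toy numbers, not the course sim): two units stand in for the two Necker cube interpretations, mutual inhibition gives the network two stable states, and whichever interpretation starts with a small head start wins and stays put.

  import numpy as np

  # Two units stand in for the two Necker cube interpretations (toy values, not
  # the course model): mutual inhibition plus a shared excitatory bias gives the
  # network two attractors, so it settles into one stable interpretation.
  W = np.array([[0.0, -2.0],
                [-2.0, 0.0]])   # mutual inhibition between the two interpretations
  bias = 1.0                    # constant excitatory drive from the ambiguous image

  def settle(state, steps=100, rate=0.2):
      for _ in range(steps):
          net = W @ state + bias
          target = 1 / (1 + np.exp(-4 * net))        # sigmoid activation
          state = state + rate * (target - state)    # leaky update toward the target
      return np.round(state, 2)

  print(settle(np.array([0.6, 0.4])))   # the first interpretation wins and stays active
  print(settle(np.array([0.4, 0.6])))   # from the other start, the second one wins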

Which of the following is NOT true of learning in neural networks?
Learning cannot solve problems that are harder than simple one-to-one mappings (this is the false statement: with hidden layers and error-driven weight updates, networks can learn much harder mappings such as XOR; see the sketch below).
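
For example (a sketch with made-up settings, not the course model): a tiny two-layer network trained with backprop can learn XOR, a mapping that no single-layer one-to-one associator can represent. With this seed it usually ends up near the right outputs; a different seed or more epochs may be needed.

  import numpy as np

  rng = np.random.default_rng(0)
  X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
  T = np.array([[0.], [1.], [1.], [0.]])     # XOR targets

  W1 = rng.normal(size=(2, 4))               # input -> hidden weights
  b1 = np.zeros(4)
  W2 = rng.normal(size=(4, 1))               # hidden -> output weights
  b2 = np.zeros(1)
  lr = 0.1

  for _ in range(10000):
      H = np.tanh(X @ W1 + b1)                       # hidden layer
      Y = 1 / (1 + np.exp(-(H @ W2 + b2)))           # sigmoid output
      dY = Y - T                                     # output delta (cross-entropy loss)
      dH = (dY @ W2.T) * (1 - H ** 2)                # backpropagated hidden delta
      W2 -= lr * H.T @ dY;  b2 -= lr * dY.sum(0)
      W1 -= lr * X.T @ dH;  b1 -= lr * dH.sum(0)

  H = np.tanh(X @ W1 + b1)
  Y = 1 / (1 + np.exp(-(H @ W2 + b2)))
  print(np.round(Y.ravel(), 2))                      # should end up close to [0, 1, 1, 0]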

Types of Memory

Lots and lots of types
Episodic, Semantic, Implicit/Explicit, Echoic

Episodic

  • Autobiographical memory (life events)
  • Arbitrary new memories (lab tasks)

… missed a lot, background on memory …

Pop Quiz

Same as the iClicker Qs

🧪 -> Example

Why does a basic neural network model show catastrophic interference?

It shows catastrophic interference because the same neurons and weights are involved in many different inputs, so when the weights are adjusted to fit a new input they are pulled away from the values that were optimal for inputs learned earlier.
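
A toy sketch of that explanation (made-up patterns and values, not the course model): a single linear unit trained with the delta rule learns association A, and then learning association B, whose input pattern shares a unit with A, pulls the same weights away from A's solution.

  import numpy as np

  # Two associations whose input patterns overlap (they share input unit 0),
  # so they have to share weights. Toy values.
  pattern_A, target_A = np.array([1., 1., 0., 0.]), 1.0
  pattern_B, target_B = np.array([1., 0., 1., 0.]), 0.0

  def delta_rule(w, pattern, target, lr=0.2, epochs=50):
      for _ in range(epochs):
          w = w + lr * (target - w @ pattern) * pattern   # delta-rule update
      return w

  w = delta_rule(np.zeros(4), pattern_A, target_A)
  print(f"error on A after learning A: {abs(target_A - w @ pattern_A):.3f}")   # ~0.000

  w = delta_rule(w, pattern_B, target_B)   # sequential training on B reuses the weights
  print(f"error on A after learning B: {abs(target_A - w @ pattern_A):.3f}")   # ~0.250, A partly overwritten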

How might you try to adjust the model to reduce catastrophic interference?

You could reduce interference by increasing the size of the network or by making the network 'deeper'. Both options reduce the amount of overlap between the representations of different inputs, making the network less vulnerable to interference.
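
Continuing the same toy setup (again with made-up patterns): giving the two associations input codes that don't share units, as a larger or sparser network could, keeps learning B from pulling on the weights that store A.

  import numpy as np

  def delta_rule(w, pattern, target, lr=0.2, epochs=50):
      for _ in range(epochs):
          w = w + lr * (target - w @ pattern) * pattern
      return w

  def forgetting(pattern_A, pattern_B):
      """Learn A then B on the same weights; return how far A's output drifts."""
      w = delta_rule(np.zeros(len(pattern_A)), pattern_A, 1.0)   # both targets are 1.0:
      w = delta_rule(w, pattern_B, 1.0)                          # each pattern should turn the unit on
      return abs(1.0 - w @ pattern_A)

  # Overlapping codes (share input unit 0) vs. separated codes in a wider input
  # layer (no shared units). Toy values.
  print(f"overlapping inputs: {forgetting(np.array([1., 1., 0., 0.]), np.array([1., 0., 1., 0.])):.3f}")                    # ~0.250
  print(f"separated inputs:   {forgetting(np.array([1., 1., 0., 0., 0., 0.]), np.array([0., 0., 0., 0., 1., 1.])):.3f}")    # ~0.000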

Resources

  • Put useful links here

Connections