📅 -> 11/5: Memory
🤔 Vocab
⬆️ Unit and Larger Context
Small summary
✏️ -> Scratch Notes
iClicker
What leads a neuron to fire, signaling that it has detected relevant input?
Neurons receive a mix of excitatory and inhibitory inputs; a neuron fires when the excitatory inputs win that competition and push its net input over the firing threshold.
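A toy sketch of that threshold rule (my own illustration, not a course model — the function name and numbers are made up):

```python
# Hypothetical sketch: a neuron "fires" when summed excitatory input
# outweighs inhibitory input enough to cross its threshold.

def fires(excitatory, inhibitory, threshold=1.0):
    """Return True if net input exceeds the firing threshold."""
    net_input = sum(excitatory) - sum(inhibitory)
    return net_input > threshold

# Excitation wins the competition: net = 1.3 - 0.1 = 1.2 > 1.0, so it fires.
print(fires([0.6, 0.7], [0.1]))   # True
# Stronger inhibition holds it below threshold: net = 0.8, no spike.
print(fires([0.6, 0.7], [0.5]))   # False
```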
Which of the following is NOT true of neural networks?
Attractor dynamics prevent networks from settling into stable states. This is the false one: attractor dynamics pull activity into stable states. Think Necker cube sim
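To see why that statement is false, here is a minimal Hopfield-style sketch (my own illustration, not a course model): a corrupted start state is pulled by attractor dynamics back into the stored stable pattern.

```python
import numpy as np

# Store one pattern with a Hebbian outer-product rule, then let the
# network settle from a noisy starting state.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])

W = np.outer(pattern, pattern).astype(float)  # Hebbian weights
np.fill_diagonal(W, 0.0)                      # no self-connections

state = pattern.copy()
state[[1, 4]] *= -1                           # corrupt two units

# Update dynamics: each unit takes the sign of its net input.
for _ in range(5):
    state = np.where(W @ state >= 0, 1, -1)

print(np.array_equal(state, pattern))         # settled into the attractor
```

The settled state is the attractor; a bistable display like the Necker cube corresponds to two such attractors the network can flip between.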
Which of the following is NOT true of learning in neural networks?
Learning cannot solve problems that are harder than simple one-to-one mappings. This is the false one: error-driven learning can master nonlinear, many-to-one mappings too.
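A small sketch of that point (assumed setup of my own, not from lecture): a two-layer network trained with backprop drives down its error on XOR, a mapping harder than any one-to-one association.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])   # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# The hidden layer gives the network capacity a single layer lacks.
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

losses = []
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)          # hidden activations
    out = sigmoid(h @ W2 + b2)        # network output
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate the error through both layers (plain gradient descent).
    d_out = err * out * (1 - out)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_hid); b1 -= 0.5 * d_hid.sum(axis=0)

print(f"XOR loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```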
Types of Memory
Lots and lots of types
Episodic, Semantic, Implicit/Explicit, Echoic
Episodic
- Autobiographical memory (life events)
- Arbitrary new memories (lab tasks)
… missed a lot, background on memory …
Pop Quiz
Same as the iClicker Qs
🧪 -> Example
Why does a basic neural network model show catastrophic interference?
The model shows catastrophic interference because the same neurons and weights are shared across many inputs: training on a new input pushes the shared weights away from the levels that were optimal for earlier inputs, overwriting what was learned.
How might you try to adjust the model to reduce catastrophic interference?
You could reduce interference by increasing the size of the network, or by making the network 'deeper'. Both options reduce the overlap between the representations of different inputs, leaving the network less vulnerable to interference.
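A minimal sketch of the interference itself (an illustrative delta-rule setup of my own, not the course model): two tasks share one input unit, and training on Task B drags the shared weight away from the value Task A needed.

```python
import numpy as np

def train(w, x, target, lr=0.1, steps=200):
    """Delta-rule training of a single linear unit on one pattern."""
    for _ in range(steps):
        w = w + lr * (target - w @ x) * x   # adjust shared weights toward target
    return w

x_A = np.array([1.0, 1.0, 0.0])  # Task A input (shares unit 1 with Task B)
x_B = np.array([0.0, 1.0, 1.0])  # Task B input

w = np.zeros(3)
w = train(w, x_A, target=1.0)
print(f"Task A output after learning A: {w @ x_A:.3f}")   # ~1.0

w = train(w, x_B, target=0.0)
print(f"Task A output after learning B: {w @ x_A:.3f}")   # drifts to ~0.75
print(f"Task B output after learning B: {w @ x_B:.3f}")   # ~0.0
```

Enlarging the network or making representations sparser shrinks the overlap (here, the one shared unit), which is exactly why those changes reduce interference.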
🔗 -> Links
Resources
- Put useful links here