10/15: Learning
Unit and Larger Context
How Do We Simulate Neurons?
- Neural activity can be characterized by mathematical equations. (Learning too!)
- We can use these equations to specify the behavior of artificial neurons.
- These artificial neurons can then be put together to explore the behavior of networks of neurons (a minimal unit sketch follows)
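As a concrete illustration, here is a minimal sketch of a rate-coded artificial unit in Python; the sigmoid form and the threshold value are illustrative assumptions, not the specific equations from class.

```python
import numpy as np

# Minimal rate-coded unit: activity is a saturating function of the
# weighted sum of inputs minus a firing threshold. (Illustrative only.)
def unit_activity(x, w, threshold=0.5):
    net = np.dot(w, x)                               # weighted input
    return 1.0 / (1.0 + np.exp(-(net - threshold)))  # squashed to (0, 1)

# Units like this can be wired together into layers to form a network.
x = np.array([0.0, 1.0, 1.0])   # input activities
w = np.array([0.2, 0.6, 0.4])   # synaptic weights
print(unit_activity(x, w))      # ~0.62
```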
We can think of internal monologues as being guided by higher-level thoughts and goals. Strong anxiety about an interview might cause you to think about it despite there being no stimulus pertaining to it. This could work via higher-level goal representations sending feedback to mid-level train-of-thought representations.
Firing:
- Pre-synaptic cell fires, releasing neurotransmitters (glutamate)
- Post-synaptic cell has to have its membrane potential elevated
- For NMDA to let Ca2+ through, it needs two things:
  - Mg2+ needs to unblock the opening (this is what the post-synaptic depolarization does)
  - Glutamate has to activate it to let calcium through
- Calcium (Ca2+) enters through NMDA; sodium (Na+) enters through AMPA
- The result is an increase in calcium (Ca2+) concentration in the post-synaptic neuron (toy sketch of this coincidence detection below)
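To make the coincidence-detection logic concrete, here is a toy sketch (not a biophysical model): Ca2+ influx needs both glutamate and post-synaptic depolarization. The curve shape and all constants are illustrative assumptions.

```python
import math

def mg_unblock(v_post_mv):
    """Fraction of NMDA channels free of the Mg2+ block; rises smoothly
    with post-synaptic depolarization (illustrative curve)."""
    return 1.0 / (1.0 + math.exp(-(v_post_mv + 20.0) / 10.0))

def ca_influx(glutamate, v_post_mv):
    """Ca2+ influx ~ glutamate binding x Mg2+ unblock (coincidence detection)."""
    return glutamate * mg_unblock(v_post_mv)

print(ca_influx(1.0, -70.0))  # pre fires, post at rest   -> near 0
print(ca_influx(0.0, 0.0))    # post depolarized, no pre  -> 0
print(ca_influx(1.0, 0.0))    # both together             -> large influx
```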
Computational Modeling of Synaptic Learning
Biologically, learning is driven by calcium influx: a modest rise in Ca2+ results in LTD (weakening), while a large rise results in LTP (strengthening).
Computationally:
- XCAL = linearized BCM
Bienenstock, Cooper & Munro (1982), the BCM rule:
- Has an adaptive modification threshold
- Lower when the unit is less active
- Higher when it is more active (homeostatic)
This threshold adaptation happens in the brain as well (sketch of BCM and XCAL below).
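Here is a minimal Python sketch of a BCM-style update with a homeostatic adaptive threshold, plus a piecewise-linear "checkmark" function in the spirit of XCAL. The parameter values and the exact threshold-update rule are assumptions for illustration, not the definitive equations.

```python
import numpy as np

# BCM-style update: post-synaptic activity below the threshold theta
# produces LTD (weight decrease), activity above it produces LTP.
def bcm_dw(x, y, theta, lr=0.01):
    return lr * x * y * (y - theta)

# Homeostatic threshold: drifts toward a running average of y^2, so it
# rises when the unit is very active and falls when it is less active.
def update_theta(theta, y, tau=0.1):
    return theta + tau * (y**2 - theta)

# XCAL-style "checkmark" (linearized BCM): linear LTD dip below a
# reversal point, linear LTP above it. d_rev is an illustrative constant.
def xcal(xy, theta, d_rev=0.1):
    if xy > theta * d_rev:
        return xy - theta              # LTP region (linear)
    return -xy * (1 - d_rev) / d_rev   # LTD region (linear dip)

# Tiny demo: one unit, random inputs, threshold adapting over time.
rng = np.random.default_rng(0)
w, theta = rng.uniform(0, 0.5, 10), 0.1
for _ in range(1000):
    x = rng.random(10)
    y = float(w @ x)
    w = np.clip(w + bcm_dw(x, y, theta), 0.0, 1.0)
    theta = update_theta(theta, y)
```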
Self-Organizing Learning
The brain has several mechanisms that help it self-organize:
Inhibitory competition:
- Only some units get to learn
Positive feedback loops from Hebbian learning:
- Rich get richer: winners detect their preferred inputs even better
- But they also become more selective
Homeostatic negative feedback loop:
- Keeps activity evenly distributed across the activity space
A model that self-organizes as a function of random weights and experience can encode the regularities in its input (see the sketch below).
It is a simple lines model here, but the same mechanism could capture any regularities in neural firing (e.g., social ones).
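Below is a minimal sketch of self-organizing learning on the lines task: Hebbian updates plus inhibitory competition, here implemented as a simple k-winners-take-all. The grid size, learning rate, and update details are illustrative assumptions, not the exact model from class.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, k = 25, 10, 2        # 5x5 input grid, 10 units, 2 winners
w = rng.uniform(0.0, 0.3, size=(n_hidden, n_in))

def random_line(size=5):
    """A single horizontal or vertical line on a size x size grid."""
    img = np.zeros((size, size))
    idx = rng.integers(size)
    if rng.random() < 0.5:
        img[idx, :] = 1.0            # horizontal line
    else:
        img[:, idx] = 1.0            # vertical line
    return img.ravel()

lr = 0.05
for _ in range(2000):
    x = random_line()
    h = w @ x                        # net input to each hidden unit

    # Inhibitory competition: only the top-k units stay active and learn
    y = np.zeros(n_hidden)
    y[np.argsort(h)[-k:]] = 1.0

    # Hebbian update: winners' weights move toward the inputs they win on
    w += lr * y[:, None] * (x[None, :] - w)

# After training, each unit's weights should resemble one of the lines:
print(np.round(w[0].reshape(5, 5), 1))
```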
Scratch Notes
Biological:
- Synaptic plasticity
Computational:
- Self-organizing: soaking up regularities in the world
- Error-driven: focused on getting the right answers (two-phase sketch below)
Math:
- Captures "Neurons that fire together, wire together"
- (plus thresholds, phases)
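To illustrate the "phases" point, here is a minimal sketch of two-phase, error-driven learning in the spirit of contrastive Hebbian learning (CHL): in the minus phase the network produces its own expectation, in the plus phase the correct outcome is clamped, and weights change by the difference in coactivity between phases. Names and the learning rate are illustrative.

```python
import numpy as np

def chl_update(w, x_minus, y_minus, x_plus, y_plus, lr=0.1):
    """dW = lr * (plus-phase coproduct - minus-phase coproduct)."""
    return w + lr * (np.outer(y_plus, x_plus) - np.outer(y_minus, x_minus))

# If the expectation already matches the outcome, the two terms cancel and
# no learning occurs; only the mismatch (error) drives weight change.
x = np.array([1.0, 0.0])
w = np.zeros((2, 2))
w = chl_update(w, x, np.array([0.2, 0.8]),   # minus phase: wrong guess
               x, np.array([1.0, 0.0]))      # plus phase: correct outcome
print(w)
```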
Consider the self-organizing learning model we covered in class today. It focuses on perceptual learning. How could you extend this kind of model to explain the processes involved in a different real-world learning situation? What would the inputs be? What kind of regularities could the model pick up on?
One example of regularities the model could pick up on and be reinforced on is features being associated with different locations in space. Locations are relatively immutable, so the model could pick up on their regularities and learn the things associated with each place.
The inputs could be the street/address associated with each location, and the regularities the model could pick up on would be the things at each location, like people, stores, and diners.