```dataview
LIST
FROM #Collection
WHERE file.name = this.Entry-For
```

📗 -> Chapter 3: Networks
🎤 Vocab
Neocortex - “New cortex”; the evolutionarily most recent part of the cortex
Distributed Representation - Individual neural detectors working together to encode complex categories
Laminar Structure - The layered organization of the cortex
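To make the distributed-representation idea concrete, here's a tiny sketch of my own (not from the book): each category is a pattern of activity over shared feature detectors, and overlap between patterns carries similarity. The detector and category names are made up for illustration.

```python
import numpy as np

# Illustrative only: a distributed code over 6 feature detectors, where each
# category is encoded by a *pattern* of activity rather than one dedicated neuron.
detectors = ["has_fur", "barks", "meows", "has_wheels", "has_engine", "purrs"]

patterns = {
    "dog": np.array([1, 1, 0, 0, 0, 0], dtype=float),
    "cat": np.array([1, 0, 1, 0, 0, 1], dtype=float),
    "car": np.array([0, 0, 0, 1, 1, 0], dtype=float),
}

# Overlap (shared active detectors) is what carries similarity: dog and cat
# share the has_fur detector, while car shares nothing with either.
for a in patterns:
    for b in patterns:
        if a < b:
            overlap = patterns[a] @ patterns[b]
            print(f"{a} vs {b}: {int(overlap)} shared active detectors")
```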
❗ Information
3 Major Categories of Emergent Network Phenomena
- Categorization: Grouping of diverse patterns of activity into relevant categories.
- Bidirectional Excitatory Dynamics: Produced by the bidirectional (bottom-up and top-down, or feedforward and feedback) connectivity in the neocortex.
The overall effects of bidirectional connectivity can be summarized as an attractor dynamic, or multiple constraint satisfaction: the network can start off in a variety of different activity states and end up getting “sucked into” a common attractor state, representing a cleaned-up, stable interpretation of a noisy or ambiguous input pattern (see the sketch after this list).
- Inhibitory Competition: Arises from specialized inhibitory interneurons
Inhibition gives rise to sparse distributed representations (having a relatively small percentage of neurons active at a time, e.g., 15% or so)
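A minimal sketch of the attractor idea, using a Hopfield-style network as a stand-in for the book's bidirectional excitatory dynamics (the text's actual model is different; this only shows the “sucked into an attractor” behavior): starting from a noisy version of a stored pattern, the recurrent weights pull the state back onto that pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store one binary (+1/-1) pattern with a Hopfield-style outer-product rule.
pattern = rng.choice([-1.0, 1.0], size=20)
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0.0)                      # no self-connections

# Start from a noisy/ambiguous version of the input (flip ~30% of the units).
state = pattern.copy()
flip = rng.random(state.size) < 0.3
state[flip] *= -1

# Let the units settle repeatedly under the recurrent (bidirectional) weights.
for step in range(5):
    state = np.sign(W @ state)
    state[state == 0] = 1.0                   # break ties toward +1
    overlap = np.mean(state == pattern)
    print(f"step {step}: overlap with stored pattern = {overlap:.2f}")
# The state gets pulled into the stored attractor: overlap reaches 1.0 within
# a step or two, i.e., a cleaned-up interpretation of the noisy input.
```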
✒️ -> Scratch Notes
Biology of the Neocortex
The neocortex is split roughly 85% excitatory neurons and 15% inhibitory interneurons.
- Without inhibitory interneurons, the system would overheat with runaway excitation and lock up in epileptic seizures (as is seen when GABA channels are blocked); the sketch below shows how this competition can produce sparse activity.
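A rough sketch of how inhibitory competition can yield sparse distributed representations, using a k-winners-take-all (kWTA) shortcut in place of explicit interneuron dynamics. The unit counts and the 15% target are just illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# kWTA approximation of inhibitory competition: only the most excited ~15% of
# units stay active; the shared inhibition level suppresses the rest.
n_units = 100
sparsity = 0.15                                # target ~15% active
k = int(n_units * sparsity)

excitation = rng.normal(size=n_units)          # net excitatory input per unit
sorted_exc = np.sort(excitation)
# Place the inhibition threshold between the k-th and (k+1)-th most excited unit.
threshold = 0.5 * (sorted_exc[-k] + sorted_exc[-(k + 1)])
activity = np.clip(excitation - threshold, 0.0, None)   # suppressed units go to 0

active = np.count_nonzero(activity)
print(f"active units: {active} / {n_units} (~{active / n_units:.0%})")
```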
Layered Structures
6-Layer Structure
Layer 1: Axons?
Input areas of the cortex: These areas share an enlarged layer 4
Hidden areas: Called “hidden” because they don’t directly receive sensory input or drive motor output. These areas have thicker superficial layers 2/3
Output areas: Drive motor (muscle) control and have thicker deep layers 5/6
🧪 -> Example
Big Picture: This is what I’m currently most interested in, learning about network dynamics, so I was very excited about this chapter. I found a lot of new ideas to explore, like attractor dynamics and rotation of the input space. Going forward, I’ll definitely be following up on some of the leads I found.
Specifics: I liked the reference to current deep learning methods that rotate an input space into a learned new basis. I hope we get the chance to dive a bit more into how this is accomplished, and what advantage it gives over working in the original input space.
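As a note to self, here is one concrete way a “rotation into a learned basis” can look: PCA learns an orthonormal basis from the data and re-expresses each input in it. This is only a stand-in I picked for illustration; the chapter's deep-learning reference may well mean a learned weight matrix rather than PCA specifically.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data with correlated dimensions (the mixing matrix is arbitrary).
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])
Xc = X - X.mean(axis=0)

# SVD of the centered data gives the principal directions (rows of Vt),
# i.e., a basis learned from the data itself.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
rotated = Xc @ Vt.T        # coordinates of each input in the learned basis

# In the new basis the dimensions are decorrelated, so downstream detectors
# can treat each coordinate independently.
print("covariance in original basis:\n", np.round(np.cov(Xc, rowvar=False), 2))
print("covariance in learned basis:\n", np.round(np.cov(rotated, rowvar=False), 2))
```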