10/23/25: ECS171-L7
[Lecture Slide Link]
Vocab
Unit and Larger Context
Small summary
Scratch Notes
Autoencoders
- Compression - reduces dimensionality. The input has the original dimension; the encoder's output is a lower-dimensional representation (see the sketch after this list)
- Denoising - input is a noisy version, output is the denoised version. Noise can be introduced intentionally to create training pairs
- Capturing contextual information - e.g., in images: pixels next to each other; in books: the position of words
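A minimal sketch of a denoising autoencoder in PyTorch. The dimensions (784 input, 32 latent), the noise level, and the random stand-in data are placeholder assumptions, not from the lecture:

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: compress the input to a lower-dimensional latent code
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: reconstruct the original input from the latent code
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAE()
x = torch.rand(64, 784)                 # "clean" batch (random stand-in data)
noisy = x + 0.1 * torch.randn_like(x)   # intentionally injected noise
recon = model(noisy)
loss = nn.functional.mse_loss(recon, x)  # target is the clean input, not the noisy one
loss.backward()                          # gradients flow; plug into any optimizer
```

Training against the clean input forces the latent code to keep the signal and drop the noise; dropping the noise injection turns this into a plain compression autoencoder.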
Default autoencoders are not generative: they are trained to encode and decode with as little loss as possible, no matter how the latent space is organized. Training does not impose any structure on that space.
Solution: variational autoencoders. A VAE is an AE whose training is regularised to avoid overfitting and to ensure that the latent space has good properties that enable a generative process.
To make a generative process possible, an AE needs two properties (a minimal VAE sketch follows this list):
- Continuity (close points in the latent space should give similar content once decoded)
- Completeness (a point sampled from the latent space should give meaningful content once decoded)
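A minimal VAE sketch, assuming a standard-normal prior and the same placeholder dimensions as above. The encoder outputs a mean and log-variance instead of a single point, and a KL term regularizes the latent space:

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(128, latent_dim)  # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, input_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, so gradients
        # flow through the sampling step
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

x = torch.rand(64, 784)
recon, mu, logvar = VAE()(x)
recon_loss = nn.functional.mse_loss(recon, x)
# Closed-form KL(q(z|x) || N(0, I)): pulls the encoder's distribution
# toward the prior, which is what organizes the latent space
kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
loss = recon_loss + kl
```

Encoding each input as a distribution rather than a point is what buys continuity and completeness: nearby samples of z must decode to similar outputs, and the KL term keeps the encoded distributions overlapping the prior you sample from at generation time.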
KL divergence: assesses how similar two distributions are.
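For reference (standard definition, not from the slide), the discrete form is:

$$D_{\mathrm{KL}}(P \,\|\, Q) = \sum_x P(x)\,\log\frac{P(x)}{Q(x)}$$

It is zero exactly when $P = Q$; in the VAE loss above, this is the term that pushes the encoder's distribution $q(z|x)$ toward the standard-normal prior.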
Refresh the Info
Did you generally find the overall content understandable, compelling, or relevant, and why? Which aspects of the content were most novel or challenging for you, and which were most familiar or straightforward?
Did a specific aspect of the content raise questions for you or relate to other ideas and findings you've encountered? Are there other related issues you wish had been covered?
Links
Resources
- Put useful links here
Connections
- Link all related words