📗 -> 10/23/25: ECS171-L7


[Lecture Slide Link]

🎤 Vocab

โ— Unit and Larger Context

Small summary

✒️ -> Scratch Notes

Autoencoders

  1. Compression - reduce dimensionality. The encoder maps the input from its original dimension down to a reduced latent dimension (a minimal sketch follows this list)
  2. Denoising - input is a noisy version, output is the denoised version. Noise can be introduced intentionally to create training pairs
  3. Capturing contextual information - e.g., in images: pixels next to each other; in books: the position of words
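
A minimal sketch of case 1 in PyTorch, assuming flattened 28x28 inputs (784 dims) and an illustrative latent size of 32; the exact sizes and layers are assumptions, not from the slides:

```python
# Undercomplete autoencoder: encoder compresses 784 -> 32, decoder reconstructs.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: original dimension -> reduced latent dimension
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: latent dimension -> back to original dimension
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)      # compressed representation
        return self.decoder(z)   # reconstruction

model = Autoencoder()
x = torch.rand(16, 784)                      # dummy batch
loss = nn.functional.mse_loss(model(x), x)   # reconstruction loss
```

For case 2 (denoising), the same model would be trained on `model(x + noise)` against the clean `x`.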

Default autoencoders are not generative: they are trained to encode and decode with as little loss as possible, no matter how the latent space is organized, and they impose no structure on it.
Solution: variational autoencoders. A VAE is an AE whose training is regularised to avoid overfitting and to ensure that the latent space has good properties that enable a generative process.

To make the generative process possible, an AE needs 2 properties (see the loss sketch after this list for how a VAE enforces them):

  1. Continuity (nearby points in the latent space should decode to similar content)
  2. Completeness (a point sampled from the latent space should decode to meaningful content)
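
A sketch of the VAE's regularised loss, assuming a Gaussian encoder that outputs `mu` and `log_var` per input; the MSE reconstruction term and the unweighted KL term are assumptions (other choices, e.g. BCE or a weighted KL, are common):

```python
# VAE loss = reconstruction + KL(N(mu, sigma^2) || N(0, I)).
# The KL term packs posteriors around the origin (completeness) and keeps
# them overlapping (continuity), so sampled points decode meaningfully.
import torch

def reparameterize(mu, log_var):
    # z = mu + sigma * eps, so gradients flow through mu and log_var
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

def vae_loss(x, x_hat, mu, log_var):
    recon = torch.nn.functional.mse_loss(x_hat, x, reduction="sum")
    # Closed form for diagonal Gaussians vs. the standard normal
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl
```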

KL Divergence: assesses how similar two different distributions are. In discrete form: D_KL(P || Q) = sum_x P(x) log( P(x) / Q(x) ).
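
A tiny numeric check of the formula (the two distributions are made-up values, chosen only for illustration):

```python
# KL divergence is >= 0, is 0 iff P == Q, and is not symmetric.
import numpy as np

p = np.array([0.4, 0.6])   # example distributions (assumed values)
q = np.array([0.5, 0.5])

kl_pq = np.sum(p * np.log(p / q))   # ~0.0201 nats
kl_qp = np.sum(q * np.log(q / p))   # ~0.0204 nats: D(P||Q) != D(Q||P)
```

In a VAE, this is the term that pushes each encoded distribution toward N(0, I).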

๐Ÿงช -> Refresh the Info

Did you generally find the overall content understandable, compelling, or relevant or not, and why? Which aspects of the content were most novel or challenging for you, and which were most familiar or straightforward?

Did a specific aspect of the content raise questions for you or relate to other ideas and findings you've encountered? Are there other related issues you wish had been covered?

Resources

  • Put useful links here

Connections

  • Link all related words