Did not attend this lecture
Filling in notes way after the fact
✒️ -> Scratch Notes
SVM Hyperplane
The hyperplane
- Also called decision boundary
Many possible hyperplanes to choose, which is best?
Maximal Margin Hyperplane
SVM will look for the hyperplane with the largest margin to the training data
- Margin being the separation between the hyperplane and the closest training points (computed with a dot product)
Distance from a point to the hyperplane:
distance = |w·x + b| / ‖w‖
- The numerator is a dot product, offset by the bias b
- Dividing by the denominator ‖w‖ normalizes the distance
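A minimal sketch of the distance formula above (the function name and example values are my own, not from the lecture):

```python
import numpy as np

# Distance from a point x to the hyperplane w·x + b = 0.
# Numerator: the dot product w·x offset by the bias b.
# Denominator: ||w||, which normalizes the distance.
def distance_to_hyperplane(x, w, b):
    return abs(np.dot(w, x) + b) / np.linalg.norm(w)

# Example: hyperplane 3x + 4y = 0 and point (3, 4):
# |3*3 + 4*4| / sqrt(3^2 + 4^2) = 25 / 5 = 5
print(distance_to_hyperplane(np.array([3.0, 4.0]), np.array([3.0, 4.0]), 0.0))
```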
SVM Objective Function
Then combine the hinge loss with the margin term:
min over w, b of λ‖w‖² + (1/n) Σᵢ max(0, 1 − yᵢ(w·xᵢ + b))
- The λ‖w‖² term is a regularization term: minimizing ‖w‖ maximizes the margin
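A sketch of evaluating the soft-margin objective, assuming the standard hinge-loss formulation (the regularization strength `lam` and the toy data are assumptions for illustration):

```python
import numpy as np

# Soft-margin SVM objective:
#   J(w, b) = lam * ||w||^2 + (1/n) * sum_i max(0, 1 - y_i * (w·x_i + b))
# lam * ||w||^2 is the regularization term; the hinge loss penalizes
# points that are misclassified or fall inside the margin.
def svm_objective(w, b, X, y, lam=0.01):
    margins = y * (X @ w + b)                 # y_i * (w·x_i + b) per point
    hinge = np.maximum(0.0, 1.0 - margins)    # hinge loss per point
    return lam * np.dot(w, w) + hinge.mean()

# Toy data: two well-separated points, one per class.
X = np.array([[2.0, 2.0], [-2.0, -2.0]])
y = np.array([1.0, -1.0])
w = np.array([0.5, 0.5])
# Both margins equal 2.0 > 1, so the hinge loss is 0 and only the
# regularization term remains: 0.01 * 0.5 = 0.005
print(svm_objective(w, 0.0, X, y))
```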
🧪 -> Refresh the Info
Did you generally find the overall content understandable, compelling, or relevant (or not), and why? Which aspects of the reading were most novel or challenging for you, and which were most familiar or straightforward?
Did a specific aspect of the reading raise questions for you or relate to other ideas and findings you've encountered, or are there other related issues you wish had been covered?
🔗 -> Links
Resources
- Put useful links here
Connections
- Link all related words