# 📗 -> 03/06/25: ECS124-L15


[Lecture Slide Link]

## 🎤 Vocab

โ— Unit and Larger Context

Only at lecture for like 10 mins

## ✒️ -> Scratch Notes

Hidden Markov Model (HMM)

given:
X = sequence of observations (the emissions)
$\pi$ = sequence of hidden states (the path)
A = $(a_{kl})$, where $a_{kl}$ = prob of going from state k to state l
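To make the pieces concrete, here is a minimal sketch (Python; the names `joint_prob`, `a`, `e`, and the start-state handling are my own, not from the lecture) of how a single path's probability is assembled: one transition probability $a_{kl}$ per step of $\pi$, times one emission probability per observed character. This is the $P(x,\pi)$ computation listed under "Questions we can answer" below.

```python
# Illustrative sketch -- names and start-state handling are mine, not from the lecture.

def joint_prob(x, pi, a, e, start=None):
    """P(x, pi): probability of emitting x along one fixed state path pi.

    x     -- observation sequence (e.g. a DNA string)
    pi    -- hidden-state path, same length as x
    a     -- a[k][l]: prob of going from state k to state l
    e     -- e[k][b]: prob that state k emits character b
    start -- optional start[k]; if omitted, the path is assumed to begin
             in pi[0] with probability 1, as in the worked example below
    """
    p = (start[pi[0]] if start else 1.0) * e[pi[0]][x[0]]
    for i in range(1, len(x)):
        # one transition and one emission per remaining character
        p *= a[pi[i - 1]][pi[i]] * e[pi[i]][x[i]]
    return p
```

With the B/P transition numbers and Model P emission numbers written out below, `joint_prob("GCAAATGC", "BBBPPPBB", a, e)` should reproduce the hand computation $(.85)^3 \cdot (.25)^6 \cdot (.75)^2 \cdot (.42)^2 \cdot (.15)(.3)$.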

sequence: G C A A A T G C

  • B: GCA
  • P: AAT
  • B: GC
    In state B, the prob of each character (A, C, G, T) is 25%.
$$\displaylines{
\text{Model P (emission probabilities):} \\
P(A \mid P) = 0.42 \\
P(T \mid P) = 0.30 \\
P(G \mid P) = 0.10 \\
P(C \mid P) = 0.18
}$$

Transition probabilities:

- P(B -> B) = 0.85
- P(B -> P) = 0.15
- P(P -> B) = 0.25
- P(P -> P) = 0.75

GCA | AAT | GC, with state path B B B | P P P | B B:

$$(.25)\;(.85)(.25)\;(.85)(.25)\;(.15)(.42)\;(.75)(.42)\;(.75)(.3)\;(.25)(.25)\;(.85)(.25)$$

- .85 is the probability of staying in B, as per the table above.

Collecting the factors:

$$(.85)^3 \cdot (.25)^6 \cdot (.75)^2 \cdot (.42)^2 \cdot (.15)(.3)$$

Questions we can answer:

1) One path
   - Input: one path $\pi$ and the emissions $x$; compute $P(x, \pi)$
   - Output: prob of that single path
   - Roughly $O(2n) \to O(n)$
2) Viterbi Algorithm
   - Input: the model (transition and emission probabilities) and $x$, an observation sequence
   - Output: the most likely path (see the sketch at the end of these notes)
   - A DP algorithm that is just like alignment

HMM: given the model and a set of emissions, we can also ask for the probability summed over all possible paths.

## 🧪 -> Refresh the Info

> Did you generally find the overall content understandable or compelling or relevant or not, and why, or which aspects of the reading were most novel or challenging for you and which aspects were most familiar or straightforward?

```
```

> Did a specific aspect of the reading raise questions for you or relate to other ideas and findings you've encountered, or are there other related issues you wish had been covered?

```
```

## 🔗 -> Links

### Resources

- Put useful links here

### Connections

- Link all related words
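As a companion to the Viterbi bullet above, a sketch of the DP (Python; the function name, the dictionary layout, and the assumption that the path starts in state B with probability 1 are mine, chosen to mirror the hand computation in these notes). `v[i][k]` holds the probability of the best path that emits the first `i+1` characters and ends in state `k`, which is the alignment-style recurrence the lecture refers to.

```python
# Sketch only: the B/P numbers are the ones written in the notes above;
# starting in B with probability 1 mirrors the hand computation.
a = {"B": {"B": 0.85, "P": 0.15},
     "P": {"B": 0.25, "P": 0.75}}                        # transitions a_kl
e = {"B": {c: 0.25 for c in "ACGT"},                     # B emits uniformly
     "P": {"A": 0.42, "T": 0.30, "G": 0.10, "C": 0.18}}  # Model P emissions
start = {"B": 1.0, "P": 0.0}


def viterbi(x, a, e, start):
    """Return (most likely state path, its probability) for observation x."""
    states = list(a)
    # v[i][k]: best probability over paths emitting x[:i+1] and ending in k
    v = [{k: start[k] * e[k][x[0]] for k in states}]
    back = [{}]                      # back[i][k]: best predecessor of k at i
    for i in range(1, len(x)):
        v.append({})
        back.append({})
        for k in states:
            prev, best = max(((l, v[i - 1][l] * a[l][k]) for l in states),
                             key=lambda t: t[1])
            v[i][k] = best * e[k][x[i]]
            back[i][k] = prev
    # Traceback from the best final state.
    last = max(states, key=lambda k: v[-1][k])
    path = [last]
    for i in range(len(x) - 1, 0, -1):
        path.append(back[i][path[-1]])
    return "".join(reversed(path)), v[-1][last]


print(viterbi("GCAAATGC", a, e, start))
```

The specific path it prints just depends on these numbers; the point is the recurrence, which fills an (n x number-of-states) table in O(n) time for a fixed number of states, just like the alignment DP.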