
Introduction to Graphical Models



Last Modified: April 20, 2021

CS 179: Introduction to Graphical Models: Spring 2021

Homework 3

The submission for this homework should be a single PDF file containing all of the relevant code, figures, and any text explaining your results. When coding your answers, try to write functions to encapsulate and reuse code, instead of copying and pasting the same code multiple times. This will not only reduce your programming effort, but also make it easier for us to understand and give credit for your work. Show and explain the reasoning behind your work!

Problem 1 (adapted from R. Dechter) (40 points)

Consider the Bayesian network shown below.

(a) (8 points) Draw the Markov random field (undirected graph) corresponding to this model and its factorization.

(b) (8 points) Compute the marginal probability p(F) by eliminating variables in the order [A, B, C, D, E, G] (here meaning, A first, then B, then C, etc.). Report which functions are combined at each step (e.g., in each bucket of bucket elimination), and show the scope (arguments) of, and the table corresponding to, the function produced at each step. What is the induced width of this order (the largest scope of a function produced during the computation)? Note: you may check your answer easily using the pyGMs GraphModel.eliminate function, but please compute the intermediate functions manually. (A minimal numpy illustration of a single elimination step appears after the figure.)

(c) (8 points) Without actually computing the tables, what is the induced width of the ordering [D, G, E, F, B, C, A]? (D first, then G, etc.)

(d) (8 points) Suppose that we observe evidence D = 0, C = 1. First, simplify the graphical model by assigning the values of these variables (this will remove those nodes from the graph); what is the new Markov random field (undirected graph) associated with this conditional model?

(e) (8 points) Now, compute the probability of evidence p(D = 0, C = 1) by eliminating (summing out) the other variables. Select a good order and compute the intermediate functions manually as in part (b); you do not need to output the tables themselves for this part, just the scopes and the final probability. What was the induced width of the order you selected for this problem? Again, you can check your result easily using the pyGMs condition and eliminate functions; see lecture and slides for examples.

[Figure: Bayesian network over variables A, B, C, D, E, F, G (edges A→C, B→E, C→E, E→G, D→F, E→F), with conditional probability tables p(A), p(B), p(D), p(C|A), p(G|E), p(E|B,C), p(F|D,E); the numeric table entries are garbled in the source.]



Problem 2: Hidden Markov Model (40 points)

Consider a four-state finite state machine, with states Y_t ∈ {0, 1, 2, 3}, whose state transition diagram is shown below. We always start (at time t = 0) in state Y_0 = 0. In this problem, we will compute marginal probabilities and posterior marginal probabilities.

[Figure: state transition diagram over the states 0, 1, 2, 3; the transition probabilities on the edges are garbled in the source.]
(a) (8 points) Create (as a numpy array) the transition matrix T associated with this state transition diagram. For consistency in solutions, please make the first axis correspond to the current time t and the second axis to the next time (t+1), i.e., T[a,b] = p(Y_{t+1} = b | Y_t = a). Print the array T. To verify your answer, also calculate, both by hand and using T, the values of (1) p(Y_2 = 0 | Y_0 = 0) and (2) p(Y_2 = 2 | Y_0 = 0) (note: at time 2), and verify that the two computations agree. (A hedged numpy sketch follows.)
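A minimal sketch of this computation; the entries of T below are placeholders (the true values come from the transition diagram, which is garbled in this copy), and only the T @ T pattern is the point:

    import numpy as np

    # Placeholder transition matrix: T[a, b] = p(Y_{t+1} = b | Y_t = a).
    # Substitute the probabilities from the assignment's diagram.
    T = np.array([[0.0, 0.5, 0.5, 0.0],
                  [0.3, 0.0, 0.7, 0.0],
                  [0.3, 0.0, 0.4, 0.3],
                  [0.0, 0.0, 0.5, 0.5]])
    assert np.allclose(T.sum(axis=1), 1.0)   # each row must be a distribution

    # Two-step transitions: (T @ T)[a, b] = p(Y_{t+2} = b | Y_t = a).
    T2 = T @ T
    print(T2[0, 0])   # p(Y_2 = 0 | Y_0 = 0)
    print(T2[0, 2])   # p(Y_2 = 2 | Y_0 = 0)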
(b) (8 points) Our observations X_t will be deterministic given the current state Y_t, with X_t = 0 when Y_t ∈ {0, 2} and X_t = 1 when Y_t ∈ {1, 3}; i.e., X_t indicates whether we are in an odd-numbered state. Create (as a numpy array) the observation matrix O, with O[a,c] = p(X_t = c | Y_t = a). Print the array O. (A sketch follows.)
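Because the observation model here is deterministic, O can be written down directly from the stated rule; a minimal sketch using the indexing defined above:

    import numpy as np

    # O[a, c] = p(X_t = c | Y_t = a); X_t = 1 exactly when the state is odd.
    O = np.array([[1.0, 0.0],    # Y_t = 0  ->  X_t = 0
                  [0.0, 1.0],    # Y_t = 1  ->  X_t = 1
                  [1.0, 0.0],    # Y_t = 2  ->  X_t = 0
                  [0.0, 1.0]])   # Y_t = 3  ->  X_t = 1
    print(O)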
(c) (16 points) Compute the following probability distributions, either by hand or using your arrays (a forward-filtering sketch follows the list):
(1) p(Y_1)
(2) p(Y_1 | X_1 = 0)
(3) p(Y_2)
(4) p(Y_2 | X_1 = 0, X_2 = 1)
(5) p(Y_2 | X_1 = 0, X_2 = 0)
(6) p(Y_3 | X_1 = 0, X_2 = 0, X_3 = 1)
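One way to compute the conditional quantities, sketched here under the same placeholder T as above: alternate a prediction step (multiply by T) with a Bayes-rule update on each observation.

    import numpy as np

    # Placeholder T (part (a) sketch) and the deterministic O (part (b)).
    T = np.array([[0.0, 0.5, 0.5, 0.0],
                  [0.3, 0.0, 0.7, 0.0],
                  [0.3, 0.0, 0.4, 0.3],
                  [0.0, 0.0, 0.5, 0.5]])
    O = np.array([[1., 0.], [0., 1.], [1., 0.], [0., 1.]])
    p0 = np.array([1.0, 0.0, 0.0, 0.0])     # we always start in Y_0 = 0

    def forward_filter(T, O, p0, xs):
        """Return p(Y_t | X_1..X_t = xs) for t = len(xs)."""
        p = p0.copy()
        for x in xs:
            p = p @ T           # predict:  p(Y_t | X_1..X_{t-1})
            p = p * O[:, x]     # update:   weight by p(X_t = x | Y_t)
            p = p / p.sum()     # renormalize to a conditional distribution
        return p

    print(forward_filter(T, O, p0, [0, 1]))   # quantity (4): p(Y_2 | X_1=0, X_2=1)

For the unconditional marginals (1) and (3), skip the update and renormalization steps: p(Y_1) = p0 @ T and p(Y_2) = p0 @ T @ T.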
(d) (8 points) What is the distribution of states far in the future, e.g., p(Y_100)? (A sketch follows.)
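A quick numerical check (again with the placeholder T): raise T to a large power, which for large t typically converges toward a stationary distribution π satisfying π = πT, when one exists.

    import numpy as np

    # Reusing the placeholder T from part (a)'s sketch:
    T = np.array([[0.0, 0.5, 0.5, 0.0],
                  [0.3, 0.0, 0.7, 0.0],
                  [0.3, 0.0, 0.4, 0.3],
                  [0.0, 0.0, 0.5, 0.5]])
    p0 = np.array([1.0, 0.0, 0.0, 0.0])

    # p(Y_100) = p0 @ T^100.
    print(p0 @ np.linalg.matrix_power(T, 100))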

Problem 3 (Python) (20 points)

In this problem, we will use the same model as in Problem 2, but compute the most likely state sequence, i.e., the (arg) maximum over the Y variables, in Python. Write a Python function or script that takes in: the transition probability matrix T, where T[i,j] is the probability of transitioning from state i to state j; an observation probability matrix O, where O[i,k] is the probability of observing value k when in state i; an initial state distribution vector p0; and an observation sequence O = [O_1, ..., O_T], where O_t ∈ {0, ...}, and computes the most likely state sequence S = [S_1, ..., S_T] given O using dynamic programming. You may use either numpy arrays or pyGMs factors to represent the messages. You can use the following test cases to help debug:

arg max_S p(Y | X = [001]) = [0, 2, 3]
arg max_S p(Y | X = [0100]) = [0, 1, 2, 2]
arg max_S p(Y | X = [0000]) = [0, 2, 2, 2]
arg max_S p(Y | X = [010001001]) = [0, 1, 2, 2, 0, 1, 2, 0, 1]
arg max_S p(Y | X = [0011100]) = [0, 2, 3, 3, 3, 2, 2]

For your solution, provide a listing of your function, and your predictions on a few more test cases:
(a) arg max_S p(Y | X = [0001]) = ?
(b) arg max_S p(Y | X = [00011]) = ?
(c) arg max_S p(Y | X = [00010]) = ?
(d) arg max_S p(Y | X = [000100]) = ?
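A hedged sketch of the dynamic program the problem asks for (the Viterbi algorithm), written in log space for numerical stability. The function name and the T entries below are assumptions; O and p0 follow Problem 2, and the expected output [0, 2, 3] holds for the true T from the assignment, not necessarily for this placeholder.

    import numpy as np

    # Placeholder T (the true values come from Problem 2's diagram) and the
    # deterministic O from Problem 2(b).
    T = np.array([[0.0, 0.5, 0.5, 0.0],
                  [0.3, 0.0, 0.7, 0.0],
                  [0.3, 0.0, 0.4, 0.3],
                  [0.0, 0.0, 0.5, 0.5]])
    O = np.array([[1., 0.], [0., 1.], [1., 0.], [0., 1.]])

    def most_likely_sequence(T, O, p0, xs):
        """argmax_S p(S | X = xs) by max-product dynamic programming (Viterbi)."""
        n, L = T.shape[0], len(xs)
        logT = np.log(T + 1e-300)            # tiny offset avoids log(0)
        logO = np.log(O + 1e-300)
        delta = np.zeros((L, n))             # delta[t, s]: best log-score ending in s
        back = np.zeros((L, n), dtype=int)   # backpointers to the previous state
        delta[0] = np.log(p0 + 1e-300) + logO[:, xs[0]]
        for t in range(1, L):
            scores = delta[t - 1][:, None] + logT   # scores[i, j]: best path ...i -> j
            back[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + logO[:, xs[t]]
        path = [int(delta[-1].argmax())]            # best final state
        for t in range(L - 1, 0, -1):               # follow backpointers
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    # Since Y_0 = 0 always, one natural reading is p0 = T[0] (an assumption
    # about how the problem intends p0 to be used).
    print(most_likely_sequence(T, O, T[0], [0, 0, 1]))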
