
# AI Exam Assistance: AI and Reinforcement Learning

2022-10-11 11:38, Tuesday · Category: AI assignment help · Views: 69

## AI exam

### Problem 1 — Multiple Choice, True/False, Short Answers (30 Points Total)

For each problem that requires an explanation, clearly justify your choice(s). There may be more than one correct answer, depending on how it is justified.

#### 1.(3 Points)

True/False: For a given Markov decision process, in order to extract the optimal policy π*, it is sufficient to know the transition function T(s, a, s′) and the optimal value function V*.

If false, explain why this is false. If true, explain how to extract the policy.
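For reference, one common way to extract a greedy policy from a value function and a transition model can be sketched as follows. The 2-state MDP below is hypothetical and all numbers are invented; only the shape of the computation matters.

```python
# Sketch: greedy policy extraction from a known transition model T and
# value function V (hypothetical 2-state MDP; numbers are made up).
GAMMA = 0.9

# T[(s, a)] -> list of (next_state, probability, reward)
T = {
    ("A", "x"): [("A", 0.5, 1.0), ("B", 0.5, 0.0)],
    ("A", "y"): [("B", 1.0, 2.0)],
    ("B", "x"): [("A", 1.0, 0.0)],
    ("B", "y"): [("B", 1.0, 1.0)],
}
V = {"A": 10.0, "B": 9.0}

def extract_policy(T, V, gamma=GAMMA):
    """pi(s) = argmax_a sum_s' T(s, a, s') * (R(s, a, s') + gamma * V(s'))."""
    policy = {}
    for s in V:
        actions = [a for (s0, a) in T if s0 == s]
        policy[s] = max(
            actions,
            key=lambda a: sum(p * (r + gamma * V[s2]) for s2, p, r in T[(s, a)]),
        )
    return policy

print(extract_policy(T, V))  # → {'A': 'y', 'B': 'y'} for these numbers
```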

#### 2.(5 Points)

What are the benefits of approximating a Q-function as a linear combination of features?

Circle all the following that apply:

(a) Lower memory requirements for large state-action spaces.

(b) Generalization to unseen states and actions.

(c) Convergence to the optimal Q-function is guaranteed.

Briefly explain each of your selections and why each choice you did not select does not apply.
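As a minimal sketch of what "linear combination of features" means here, the snippet below uses invented feature functions. Note how the number of stored weights is fixed regardless of the size of the state-action space, and how any (s, a) pair, seen or unseen, gets a Q-value through its features.

```python
# Sketch: Q(s, a) = sum_i w_i * f_i(s, a). The feature functions are
# hypothetical, chosen only to illustrate the representation.

def features(s, a):
    # invented features of a (state, action) pair: bias, state id, action flag
    return [1.0, float(s), 1.0 if a == "r" else 0.0]

def q_value(w, s, a):
    return sum(wi * fi for wi, fi in zip(w, features(s, a)))

w = [0.5, 0.2, 0.1]  # only len(w) numbers stored, not |S| x |A| entries
print(q_value(w, 3, "r"))  # works even if (3, "r") was never visited
```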

#### 3.(4 Points)

Suppose you are using Q-learning to perform reinforcement learning and the current iteration has the following Q-values:

Suppose we observe a transition (0, r, +1, 1) (i.e., we begin in state 0, perform action r, obtain +1 reward, and end up in state 1), and further assume a discount factor of γ = 1 and learning rate α = 0.01. Describe what happens to each of the Q-values during Q-learning by marking whether each value will increase, decrease, or stay the same after the Q-value update for this observed transition:

• Q(0, r) will (increase) / (decrease) / (stay the same)
• Q(0, l) will (increase) / (decrease) / (stay the same)
• Q(1, r) will (increase) / (decrease) / (stay the same)
• Q(1, l) will (increase) / (decrease) / (stay the same)

Briefly explain your selections without calculating anything.
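For reference, the tabular Q-learning update for this observed transition can be sketched as below. The initial Q-values are placeholders (the exam's actual table is not reproduced here); the point is that a single observed transition updates only the Q-value of the state-action pair that was actually taken.

```python
# Sketch of the tabular Q-learning update for the observed transition
# (s=0, a="r", r=+1, s'=1) with gamma = 1 and alpha = 0.01.
def q_update(Q, s, a, r, s_next, alpha=0.01, gamma=1.0):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    target = r + gamma * max(Q[(s_next, b)] for b in ("l", "r"))
    Q[(s, a)] += alpha * (target - Q[(s, a)])

Q = {(0, "l"): 0.0, (0, "r"): 0.0, (1, "l"): 0.0, (1, "r"): 0.0}
q_update(Q, 0, "r", 1.0, 1)
print(Q)  # only the Q(0, "r") entry has moved
```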

#### 4.(5 Points)

Clearly mark the statement(s) of conditional independence that are implied by the following Bayesian network (no explanation required):

#### 5.(5 Points)

A Naive Bayes classifier achieves perfect accuracy on the training data, but performs poorly (i.e., has low accuracy) when evaluated on a dataset that was not used for training. Which of the following actions might you take in this situation?

(a) No action – the model is performing perfectly.

(b) Adjust the decision threshold of the model.

(c) Add more features to the model.

(d) Remove features from the model.

Briefly explain each action that you did or did not take. There are many valid choices here: clearly explain your reasoning.

#### 6.(8 Points)

Consider the following joint probability distribution and Bayesian network:

Give θ1 and θ2 so that the joint probability distribution matches the independence assumptions implied by the Bayesian network, or show that no such choice of parameters exists.

Hint: First, write down any equations that you know that can relate θ1 and θ2.

### Problem 2 — Reinforcement Learning (25 Points Total)

Consider the following MDP, where each edge is labeled with its action (transition probabilities and rewards are unlabeled and will be learned from data):

#### 1.(3 Points)

Suppose you observe the following transitions, where s is the initial state, a is the observed action, s′ is the state that was transitioned into, and r is the reward (assume here that rewards are not stochastic):

Give the learned reward and probability for each (s, a, s′, r) tuple by filling in the following table:

#### 2.(12 Points)

Using the parameters learned in the previous step, perform 3 steps of value iteration to fill in the following table with a discount factor of γ = 0.9:
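The value-iteration recurrence being asked for, V_{k+1}(s) = max_a Σ_{s′} T(s, a, s′)(R(s, a, s′) + γ V_k(s′)), can be sketched as below. The transition model here is a placeholder, not the one learned in the previous step (that data is not reproduced in this text).

```python
# Sketch: 3 rounds of value iteration on a hypothetical 2-state MDP.
GAMMA = 0.9

# T[(s, a)] -> list of (next_state, probability, reward); placeholder values
T = {
    ("A", "x"): [("A", 1.0, 0.0)],
    ("A", "y"): [("B", 1.0, 1.0)],
    ("B", "x"): [("A", 1.0, 2.0)],
    ("B", "y"): [("B", 1.0, 0.0)],
}

V = {"A": 0.0, "B": 0.0}  # V_0 = 0 everywhere
for _ in range(3):
    V = {
        s: max(
            sum(p * (r + GAMMA * V[s2]) for s2, p, r in T[(s, a)])
            for (s0, a) in T if s0 == s
        )
        for s in V
    }
print(V)  # V_3 for this placeholder model
```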

#### 3.(7 Points)

What is the optimal policy π* according to the 3 steps of value iteration you just performed?

Show any calculations that you performed in order to determine it.

#### 4.(3 Points)

Suppose you can gather 5 more data points to learn more information about this MDP. The way you gather a new data point is to pick a particular state and action to perform (for instance, “perform action y in state A”). Which 5 more data points would you choose, and why? (There are many valid choices here – provide justification for any choice that you make.)

### Problem 3 — Bayesian Network Modeling (20 Points Total)

Consider the following situation:

You are a baker trying to make the perfect bread-making recipe. Every day you bake a fresh loaf of bread. The quality of the bread each day depends on the humidity, the temperature, and the yeast quality. The bread quality can be great or poor. Each day you open a fresh independent packet of yeast, which can be good yeast that is great for bread or dead yeast that does not work.

Each day can be high humidity or low humidity. It is more likely to be humid today if yesterday was humid. The temperature can be low, medium, or high – the temperature can affect the humidity, and the temperature also depends on the temperature of the previous day.

#### 1.(12 Points)

Draw a Bayesian network for modeling this situation over the course of three days (you are not given precise probabilities, so you should not provide the conditional probability tables for now). You can assume on the first day that no events depend on any previous days.

#### 2.(3 Points)

Give a brief explanation of how each random variable in your Bayesian network corresponds to the relevant component of the problem. What is the domain of each random variable?

#### 3.(5 Points)

Suppose you bake 3 bad loaves of bread in a row. You want to figure out the most likely underlying weather conditions that led to such bad luck. Describe the query one would perform on the Bayesian network to answer this question (i.e., give the exact mathematical formula in terms of the semantics of the Bayesian network). What is that query’s name (most-probable explanation, maximum a-posteriori, or marginal maximum a-posteriori)?
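As a sketch of the shape such a query takes (writing H_i and T_i for the humidity and temperature variables on day i, and B_i for the bread quality), one would maximize over the weather variables given the evidence:

```latex
\operatorname*{arg\,max}_{h_{1:3},\,t_{1:3}}
\Pr\bigl(H_{1:3} = h_{1:3},\, T_{1:3} = t_{1:3}
\mid B_1 = \text{poor},\, B_2 = \text{poor},\, B_3 = \text{poor}\bigr)
```

Note that any variables that are neither queried nor observed (here, the yeast variables) are summed out inside the probability; which of the three named query types this corresponds to is exactly what the problem asks you to identify.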

### Problem 4 — Classifying with Missing Data (35 Points Total)

In this problem we will walk through the steps of training and evaluating a Naive Bayes classifier where some of the data points are missing! It is very common in a practical machine learning dataset for some data points to be missing particular attributes or features – for instance, in a medical diagnosis setting, patients may decline to tell you personal information if it is embarrassing.

Consider the following Naive Bayes classifier Bayesian network with three features (f1, f2, f3) and label y, and assume that all features and labels are binary:

We will now consider a few ways of dealing with missing data, and evaluate our classifier when trained with these different mitigations.

#### 1.(4 Points)

The first and simplest way is to ignore all the rows with missing attributes by removing them from the dataset. Give the learned conditional probability tables for the Naive Bayes classifier trained with this dataset.

#### 2.(10 Points)

Using a decision threshold of 0.8 for predicting a positive label (i.e., predict + if Pr(y = + | f1, f2, f3) > 0.8), give the classification for the point f1 = 1, f2 = 0, f3 = 1.
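The computation has the following shape under the Naive Bayes factorization, Pr(y | f) ∝ Pr(y) · Π_i Pr(f_i | y). The CPT numbers below are placeholders, since the real ones come from the tables learned in sub-problem 1 (not reproduced here).

```python
# Sketch: thresholded Naive Bayes prediction with placeholder CPTs.
def posterior_pos(prior_pos, cpt, f):
    """cpt[(i, y)] = Pr(f_i = 1 | y); f is the observed feature vector."""
    def joint(y, prior):
        p = prior
        for i, fi in enumerate(f):
            p_one = cpt[(i, y)]
            p *= p_one if fi == 1 else 1.0 - p_one
        return p
    jp, jn = joint("+", prior_pos), joint("-", 1.0 - prior_pos)
    return jp / (jp + jn)  # normalize over the two label values

cpt = {(0, "+"): 0.8, (1, "+"): 0.4, (2, "+"): 0.9,
       (0, "-"): 0.3, (1, "-"): 0.6, (2, "-"): 0.2}
p = posterior_pos(0.5, cpt, (1, 0, 1))
print(p, "predict +" if p > 0.8 else "predict -")
```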

#### 3.(3 Points)

What are some downsides to ignoring rows with missing data? Give at least 2 distinct reasons.

#### 4.(3 Points)   人工智能考试代考

One way of dealing with missing data is called imputation – a fancy way of saying “make something up”. One way of making up values is to choose the most likely instantiation for each missing value according to its marginal probability – for instance, if Pr(f1 = 1) > 1/2 according to the empirical distribution given by the dataset, then replace all instances of “?” for f1 in the dataset with the value 1 (ties can be broken arbitrarily, but you should replace all “?” with the same value). This strategy is called marginal imputation. Fill in the dataset below that is computed by marginal imputation (Note that you are using the marginal probabilities according to the dataset here, so you are not required to do inference on the Bayesian network):

Show any calculations that you performed to fill in this table.
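The procedure can be sketched as below on a made-up dataset (the exam's actual table is not reproduced in this text): each “?” in a column is replaced by that column's empirical majority value among the observed entries.

```python
# Sketch: marginal imputation on a hypothetical binary-feature dataset.
rows = [
    {"f1": 0, "f2": 1, "f3": "?"},
    {"f1": "?", "f2": 0, "f3": 1},
    {"f1": 1, "f2": 1, "f3": 1},
    {"f1": 1, "f2": "?", "f3": 0},
]

def marginal_impute(rows, cols=("f1", "f2", "f3")):
    fill = {}
    for c in cols:
        observed = [r[c] for r in rows if r[c] != "?"]
        # majority observed value; ties broken toward 1 (arbitrarily)
        fill[c] = 1 if sum(observed) / len(observed) >= 0.5 else 0
    return [{c: (fill[c] if r[c] == "?" else r[c]) for c in cols} for r in rows]

print(marginal_impute(rows))
```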

#### 5.(3 Points)

Give the CPTs for the trained Naive Bayes classifier after applying marginal imputation to the dataset.

#### 6.(6 Points)

Another way of dealing with missing data is to make use of the Bayesian network structure itself. First, we can learn the maximum likelihood model according to the dataset where we ignore rows with missing data (as in sub-problem 1). Then, using this model, we can compute the most likely value for each missing attribute! We will call this Bayesian network imputation. Use this strategy to impute the missing “?” for the first row in the dataset (i.e., compute arg max_{v ∈ {0,1}} Pr(f3 = v | f1 = 0, f2 = 1, y = +) for the Bayesian network you trained in the first part of this problem).
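One structural observation helps here: under the Naive Bayes factorization, each feature is conditionally independent of the others given the label, so the full conditional reduces to a single CPT lookup. The sketch below uses a placeholder value for the CPT entry learned in sub-problem 1.

```python
# Sketch: Bayesian-network imputation under the Naive Bayes structure.
# Since f3 is independent of f1, f2 given y, the factors for f1 and f2
# cancel in the conditional:
#   Pr(f3 = v | f1 = 0, f2 = 1, y = +) = Pr(f3 = v | y = +).
def cond_f3(v, p_f3_one_given_pos):
    return p_f3_one_given_pos if v == 1 else 1.0 - p_f3_one_given_pos

p_f3_one_given_pos = 0.7  # placeholder for Pr(f3 = 1 | y = +) from part 1
best = max((0, 1), key=lambda v: cond_f3(v, p_f3_one_given_pos))
print(best)  # the imputed value for this placeholder CPT
```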

#### 7.(3 Points)

What advantages does Bayesian network imputation have over marginal imputation? Comment on why they might learn different parameters.

#### 8.(3 Points)

In what ways is Bayesian network imputation still flawed (i.e., why is missing data still a problem)?