
**Machine Learning Theory (CSC 482A/581A)**

**Problem Set 1** **Due on Friday, February 2nd, 5pm**

**Instructions:**

• You must write up your solutions individually.

• You may have high-level discussions with up to 2 other students registered in the course. If you discuss problems with others, include at the top of your submission: their names, V#’s, and the problems discussed.

• Your solutions should be clear and concise (and legible!). You are strongly encouraged to type up your solutions in LaTeX. For any problems where you only have a partial solution, be clear about any parts of your solution for which you have low confidence.

• If submitted through conneX, the due date is Friday, February 2nd, 7pm. This is a hard deadline. If submitted in writing, submit at the beginning of class (12:30pm) Friday February 2nd. You are thus incentivized to submit online (using LaTeX) as you get more time!

**Questions:**

1. Let X = R^{2} and take C to be the class of concentric circles C = {*c_r* : *r* ≥ 0}, where for each nonnegative real number *r* ≥ 0 we have *c_r*(*x*) = **1**[‖*x*‖_{2} ≤ *r*]. Prove that C is PAC learnable. In particular, exhibit a PAC learning algorithm which, given a training sample of size (1/*ε*) log(1/*δ*), finds with probability at least 1 − *δ* a concept whose error is at most *ε*.
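As a sanity check (a sketch of one natural candidate algorithm, not the required proof), the classic "tightest fit" learner for this class outputs the smallest circle containing all positive training points. The helper names below (`learn_circle`, `predict`) are illustrative, not from the assignment:

```python
import math

def learn_circle(sample):
    """Tightest-fit learner: return the smallest radius r_hat such that
    c_{r_hat} is consistent with the training sample, where each item
    is (x, y) with x a point in R^2 and y in {0, 1}."""
    pos_norms = [math.hypot(*x) for x, y in sample if y == 1]
    # With no positive examples, the all-negative concept c_0 is consistent.
    return max(pos_norms) if pos_norms else 0.0

def predict(r_hat, x):
    """Evaluate c_{r_hat}(x) = 1[||x||_2 <= r_hat]."""
    return 1 if math.hypot(*x) <= r_hat else 0
```

Since r_hat never exceeds the target radius *r*, the learned circle can only err on the annulus between r_hat and *r*; the PAC proof then bounds the probability that this annulus carries mass more than *ε*.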

2. Devise an efficient mistake-bound learner for the concept class of *k*-term DNF over X = {0, 1}^{d}. The runtime and mistake bound of your algorithm should both be polynomial in *d*; you may treat *k* as a constant.
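One standard approach (an assumption here, not necessarily the intended solution): every *k*-term DNF is expressible as a *k*-CNF, so one can run the classic elimination algorithm for conjunctions over the O((2*d*)^{k}) "metavariables" (disjunctions of at most *k* literals), giving a mistake bound polynomial in *d* when *k* is constant. A sketch:

```python
from itertools import combinations, product

def metavariables(d, k):
    """All disjunctions of at most k literals over x_1..x_d; a literal
    (i, b) reads "x_i == b"."""
    literals = [(i, b) for i in range(d) for b in (0, 1)]
    return [c for size in range(1, k + 1)
            for c in combinations(literals, size)]

def eval_clause(clause, x):
    """Evaluate a disjunction of literals on x in {0,1}^d."""
    return any(x[i] == b for i, b in clause)

class EliminationLearner:
    """Online learner: the hypothesis is the conjunction of all clauses
    not yet falsified by a positive example.  The hypothesis is always
    at least as specific as the target k-CNF, so every mistake is a
    false negative and each one eliminates at least one clause; the
    mistake bound is at most |metavariables(d, k)| = O((2d)^k)."""

    def __init__(self, d, k):
        self.clauses = set(metavariables(d, k))

    def predict(self, x):
        return 1 if all(eval_clause(c, x) for c in self.clauses) else 0

    def update(self, x, y):
        if y == 1 and self.predict(x) == 0:
            # False negative: drop every clause this positive example falsifies.
            self.clauses = {c for c in self.clauses if eval_clause(c, x)}
```

Treating *k* as a constant, both the per-round runtime and the mistake bound are polynomial in *d*, as the problem requires.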

3. Let X = {0, 1}^{d} and consider PAC learning a finite concept class C. Assume that the inputs are drawn i.i.d. from an unknown distribution *P* over X, and the labels are generated via the rule *Y* = *c*(*X*) for some *c* ∈ C.

Let’s call this problem the “clean” problem; so, in the clean problem, the training sample consists of random examples of the form (*X, Y*) with *X* ∼ *P* and *Y* = *c*(*X*).

Next, consider the following “corrupted” problem: each time we request a random example (*X, Y*), with probability *α*(*X*) ∈ [0, 1] the value of the label *Y* is flipped. Call the resulting label *Ỹ*. Thus,

*Ỹ* = −*Y* with probability *α*(*X*), and *Ỹ* = *Y* with probability 1 − *α*(*X*).

In the corrupted problem, the examples are of the form (*X, Ỹ*), and so the labels are noisy.
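The corruption process itself is easy to simulate; a minimal sketch with labels in {−1, +1} (the function name is illustrative):

```python
import random

def corrupt_label(x, y, alpha, rng):
    """Return the noisy label Ytilde: -y with probability alpha(x),
    and y otherwise (labels live in {-1, +1})."""
    return -y if rng.random() < alpha(x) else y
```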

(a) Using *c* and *α*, derive an expression for the Bayes classifier for the corrupted problem.

(b) For the remaining questions, assume that *α*(*x*) = 1/4 for all *x* ∈ X. What is the Bayes classifier for the corrupted problem?

(c) What is the Bayes risk for the corrupted problem?

(d) Let *c_ε* ∈ C be a hypothesis for which Pr(*c_ε*(*X*) ≠ *c*(*X*)) = *ε* > 0. What is the risk (expected zero-one loss) of *c_ε* for the corrupted problem?

(e) Design an algorithm for PAC learning C given access only to corrupted labeled examples (*X*_{1}, *Ỹ*_{1}), . . . , (*X*_{n}, *Ỹ*_{n}). That is, your algorithm should, with probability at least 1 − *δ*, output a concept *f̂* ∈ C for which Pr_{X∼P}(*f̂*(*X*) ≠ *c*(*X*)) ≤ *ε*. Your algorithm should be statistically efficient (you should mention the sample size *n* required, and *n* should be polynomial in 1/*ε* and 1/*δ*), but it need not be computationally efficient. Please explain why your algorithm is correct.
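For intuition on part (e) (a sketch of one plausible approach, not the assigned answer): when *α* ≡ 1/4, a concept with clean risk *ε* has corrupted risk (3/4)*ε* + (1/4)(1 − *ε*) = 1/4 + *ε*/2, so the target still minimizes the corrupted risk and empirical risk minimization on the noisy sample can single it out. The function name below is illustrative:

```python
def erm_corrupted(concepts, noisy_sample):
    """Empirical risk minimization on corrupted examples: return the
    concept in the finite class `concepts` that disagrees with the
    fewest noisy labels.  With alpha = 1/4, a concept with clean risk
    eps has corrupted risk 1/4 + eps/2, so the corrupted-risk minimizer
    is the target; Hoeffding plus a union bound over C then suggests a
    sample size n = O((log|C| + log(1/delta)) / eps^2)."""
    def disagreements(f):
        return sum(1 for x, y in noisy_sample if f(x) != y)
    return min(concepts, key=disagreements)
```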
