Homework 4


Disclaimer: This homework applies SMOTE to a seriously imbalanced dataset with a large number of features and data points. SMOTE is an inherently time-consuming method. Start this homework early so that you have enough time to run SMOTE on the full dataset.

1. The LASSO and Boosting for Regression

(a) Download the Communities and Crime data1 from https://archive.ics.uci.edu/ml/datasets/Communities+and+Crime. Use the first 1495 rows of data as the training set and the rest as the test set.

(b) The data set has missing values. Use a data imputation technique to deal with the missing values in the data set. The data description mentions some features are non-predictive. Ignore those features.
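A minimal sketch of one way to do this, assuming the raw file was saved locally as communities.csv; the file name and the median-fill strategy are illustrative choices, not the required method. The data description lists the first five attributes (state, county, community, communityname, fold) as non-predictive.

```python
import pandas as pd

# The UCI file has no header row and uses "?" for missing values;
# communities.csv is an assumed local copy of communities.data.
df = pd.read_csv("communities.csv", header=None, na_values="?")

# The data description marks the first five attributes as non-predictive:
# state, county, community, communityname, fold. Drop them.
df = df.drop(columns=df.columns[:5])

# Median imputation, one simple data imputation technique among many.
df = df.fillna(df.median(numeric_only=True))

# First 1495 rows as the training set, the rest as the test set.
train, test = df.iloc[:1495], df.iloc[1495:]
```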

(c) Plot a correlation matrix for the features in the data set.
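A quick way to draw the matrix with pandas and matplotlib, continuing with the train frame from the sketch above:

```python
import matplotlib.pyplot as plt

corr = train.corr()  # pairwise correlations between all remaining features
plt.figure(figsize=(10, 8))
plt.imshow(corr, cmap="coolwarm", vmin=-1, vmax=1)
plt.colorbar(label="correlation")
plt.title("Communities and Crime: correlation matrix")
plt.show()
```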

(d) Calculate the Coefficient of Variation CV = s/m for each feature, where s is the sample standard deviation and m is the sample mean.
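Continuing with the same train frame, the per-feature CV is one line of pandas:

```python
# Sample standard deviation divided by sample mean, per feature.
cv = train.std() / train.mean()
```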

(e) Pick ⌊√128⌋ features with the highest CV, and make scatter plots and box plots for them. Can you draw conclusions about the significance of those features just from the scatter plots?
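A plotting sketch under the same assumptions (the cv series and train frame from the previous snippets, response in the last column); ⌊√128⌋ = 11:

```python
import numpy as np
import matplotlib.pyplot as plt

k = int(np.sqrt(128))                                 # floor of sqrt(128) = 11
top = cv.drop(train.columns[-1]).nlargest(k).index    # exclude the target

# Box plots of the 11 selected features.
train[top].plot(kind="box", subplots=True, layout=(3, 4), figsize=(12, 8))
plt.show()

# Scatter plot of each selected feature against the response.
y = train.iloc[:, -1]
fig, axes = plt.subplots(3, 4, figsize=(12, 8))
for ax, col in zip(axes.ravel(), top):
    ax.scatter(train[col], y, s=5)
    ax.set_xlabel(col)
plt.tight_layout()
plt.show()
```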

(f) Fit a linear model using least squares to the training set and report the test error.
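A scikit-learn sketch, assuming the response is the last column of the imputed frames (ViolentCrimesPerPop in this dataset):

```python
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

X_train, y_train = train.iloc[:, :-1], train.iloc[:, -1]
X_test, y_test = test.iloc[:, :-1], test.iloc[:, -1]

ols = LinearRegression().fit(X_train, y_train)
print("OLS test MSE:", mean_squared_error(y_test, ols.predict(X_test)))
```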

(g) Fit a ridge regression model on the training set, with λ chosen by cross-validation. Report the test error obtained.
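Continuing with the same X_train/y_train, RidgeCV searches λ (called alpha in scikit-learn) over a grid; the log-spaced grid below is an arbitrary starting point:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.metrics import mean_squared_error

ridge = RidgeCV(alphas=np.logspace(-4, 4, 100), cv=10).fit(X_train, y_train)
print("chosen lambda:", ridge.alpha_)
print("ridge test MSE:", mean_squared_error(y_test, ridge.predict(X_test)))
```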

(h) Fit a LASSO model on the training set, with λ chosen by cross-validation. Report the test error obtained, along with a list of the variables selected by the model. Repeat with standardized features.2 Report the test error for both cases and compare them.
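A LassoCV sketch under the same assumptions; the second fit repeats the exercise on standardized features:

```python
from sklearn.linear_model import LassoCV
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import StandardScaler

lasso = LassoCV(cv=10, random_state=0).fit(X_train, y_train)
print("LASSO test MSE:", mean_squared_error(y_test, lasso.predict(X_test)))
print("selected variables:", list(X_train.columns[lasso.coef_ != 0]))

# Repeat on standardized features and compare the two test errors.
scaler = StandardScaler().fit(X_train)
lasso_std = LassoCV(cv=10, random_state=0).fit(scaler.transform(X_train), y_train)
print("standardized LASSO test MSE:",
      mean_squared_error(y_test, lasso_std.predict(scaler.transform(X_test))))
```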

(i) Fit a PCR model on the training set, with M (the number of principal components) chosen by cross-validation. Report the test error obtained.
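One way to express PCR is a PCA-plus-least-squares pipeline with M selected by grid-search cross-validation. Searching every possible M, as below, is slow; a coarser grid is fine for a first pass:

```python
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

pcr = Pipeline([("pca", PCA()), ("ols", LinearRegression())])
grid = GridSearchCV(pcr,
                    {"pca__n_components": range(1, X_train.shape[1] + 1)},
                    cv=10, scoring="neg_mean_squared_error")
grid.fit(X_train, y_train)
print("best M:", grid.best_params_["pca__n_components"])
print("PCR test MSE:", mean_squared_error(y_test, grid.predict(X_test)))
```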

(j) In this section, we would like to fit a boosting tree to the data. As in classification trees, one can use any type of regression at each node to build a multivariate regression tree. Because the number of variables is large in this problem, one can use L1-penalized regression at each node. Such a tree is called an L1-penalized gradient boosting tree. You can use XGBoost3 to fit the model tree. Determine α (the regularization term) using cross-validation.
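A hedged sketch: XGBoost's reg_alpha applies an L1 penalty to the leaf weights, which is the closest built-in knob to the α described above; n_estimators and the alpha grid are arbitrary choices:

```python
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor

# reg_alpha is XGBoost's L1 penalty on leaf weights; tune it by CV.
search = GridSearchCV(XGBRegressor(n_estimators=200),
                      {"reg_alpha": [0, 1e-3, 1e-2, 1e-1, 1, 10]},
                      cv=10, scoring="neg_mean_squared_error")
search.fit(X_train, y_train)
print("best alpha:", search.best_params_["reg_alpha"])
print("boosting test MSE:", mean_squared_error(y_test, search.predict(X_test)))
```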

1 A question you may encounter: "I tried opening and downloading the dataset, but the file is not readable. How do I download the file?" Just change .data to .csv.

2 In this data set, features are already normalized.

3 Some hints on installing XGBoost on Windows: http://www.picnet.com.au/blogs/guido/2016/09/22/xgboost-windows-x64-binaries-for-download/.


2. Tree-Based Methods

(a) Download the APS Failure data from https://archive.ics.uci.edu/ml/datasets/APS+Failure+at+Scania+Trucks. The dataset contains a training set and a test set. The training set contains 60,000 rows, of which 1,000 belong to the positive class, and 171 columns, of which one is the class column. All attributes are numeric.

(b) Data Preparation

This data set has missing values. When the number of data points with missing values is significant, discarding them is not a good idea.4

i. Research what types of techniques are usually used for dealing with data with missing values.5 Pick at least one of them and apply it to this data in the next steps6 (a sketch follows this list).

ii. For each of the 170 features, calculate the coefficient of variation CV = s/m, where s is the sample standard deviation and m is the sample mean.

iii. Plot a correlation matrix for your features using pandas or any other tool.

iv. Pick ⌊√170⌋ features with the highest CV, and make scatter plots and box plots for them, similar to those on p. 129 of ISLR. Can you draw conclusions about the significance of those features just from the scatter plots? This does not mean that you will only use those features in the following questions. We picked them only for visualization.

v. Determine the number of positive and negative data points. Is this data set imbalanced?
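A sketch covering items i and v, assuming the UCI files were saved under their original names; the raw files begin with roughly 20 lines of license text before the header row and use "na" for missing values, and mean imputation is just one possible technique:

```python
import pandas as pd
from sklearn.impute import SimpleImputer

# Skip the ~20-line license preamble; "na" marks missing values.
train = pd.read_csv("aps_failure_training_set.csv", skiprows=20, na_values="na")
test = pd.read_csv("aps_failure_test_set.csv", skiprows=20, na_values="na")

# Item i: mean imputation, one possible data imputation technique.
imp = SimpleImputer(strategy="mean")
X_train = imp.fit_transform(train.drop(columns="class"))
X_test = imp.transform(test.drop(columns="class"))
y_train = (train["class"] == "pos").astype(int)
y_test = (test["class"] == "pos").astype(int)

# Item v: class counts; 1,000 positives vs. 59,000 negatives is imbalanced.
print(train["class"].value_counts())
```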

(c) Train a random forest to classify the data set. Do NOT compensate for class imbalance in the data set. Calculate the confusion matrix, ROC, AUC, and misclassification error for the training and test sets and report them (you may use the pROC package). Calculate the Out-of-Bag error estimate for your random forest and compare it to the test error.
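A scikit-learn sketch of 2c, continuing with the imputed arrays above (the metrics mirror what the pROC package reports in R); oob_score=True makes the Out-of-Bag estimate available directly:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X_train, y_train)

for name, X, y in [("train", X_train, y_train), ("test", X_test, y_test)]:
    pred = rf.predict(X)
    print(name, "confusion matrix:\n", confusion_matrix(y, pred))
    print(name, "AUC:", roc_auc_score(y, rf.predict_proba(X)[:, 1]))
    print(name, "misclassification:", (pred != y).mean())

# Out-of-Bag error estimate, to compare against the test error.
print("OOB error:", 1 - rf.oob_score_)
```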

(d) Research how class imbalance is addressed in random forests. Compensate for class imbalance in your random forest and repeat 2c. Compare the results with those of 2c.
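One standard compensation is to weight classes inversely to their frequencies, which scikit-learn exposes directly (a weighted stand-in for the sampling-based balanced random forest); rerun the 2c evaluation on rf_bal:

```python
from sklearn.ensemble import RandomForestClassifier

# Weight classes inversely to their frequencies, then redo the 2c metrics.
rf_bal = RandomForestClassifier(n_estimators=500, oob_score=True,
                                class_weight="balanced", random_state=0)
rf_bal.fit(X_train, y_train)
```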

(e) Model Trees

In the case of a univariate tree, only one input dimension is used at a tree split. In a multivariate tree, or model tree, all input dimensions can be used at a decision node, so it is more general. In univariate classification trees, majority voting is used at each node to determine the split of that node as the decision rule. In model trees, a (linear) model that relies on all of the variables is used to determine the split of that node (i.e., instead of using Xj > s as the decision rule, one has Σj βjXj > s as the decision rule). Alternatively, in a regression tree, instead of using the average in the region associated with each node, a linear regression model is used to determine the value associated with that node.

4 In reality, when we have a model and we want to fill in missing values, we do not have access to training data, so we can only use the statistics of test data to fill in the missing values. For simplicity, in this exercise, you first fill in the missing values and then split your data into training and test sets.

5 These are called data imputation techniques.

6 You are welcome to test more than one method.

One of the methods that can be used at each node is logistic regression. One can use the python-weka-wrapper package7 to call Weka and train Logistic Model Trees for classification. Train Logistic Model Trees for the APS data set without compensating for class imbalance. Use one of 5-fold, 10-fold, or leave-one-out cross-validation to estimate the error of your trained model and compare it with the test error. Report the confusion matrix, ROC, and AUC for the training and test sets.
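A hedged sketch with python-weka-wrapper (class and method names per its documentation; the pre-imputed CSV file name is an assumption, and the class column must be the last, nominal attribute):

```python
import weka.core.jvm as jvm
from weka.core.converters import Loader
from weka.classifiers import Classifier, Evaluation
from weka.core.classes import Random

jvm.start()
data = Loader(classname="weka.core.converters.CSVLoader") \
    .load_file("aps_train_imputed.csv")    # assumed pre-imputed CSV
data.class_is_last()                       # Weka needs the class attribute set

lmt = Classifier(classname="weka.classifiers.trees.LMT")
evl = Evaluation(data)
evl.crossvalidate_model(lmt, data, 10, Random(1))   # 10-fold CV
print(evl.summary())
print("AUC:", evl.area_under_roc(1))       # 1 = index of the positive class
jvm.stop()
```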

(f) Use SMOTE (Synthetic Minority Over-sampling Technique) to pre-process your data to compensate for class imbalance.8 Train a Logistic Model Tree using the pre-processed data and repeat 2e. Do not forget that there is a right and a wrong way of doing cross-validation here. Compare the uncompensated case with SMOTE.
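The right way is to oversample inside each training fold only, never before splitting; imbalanced-learn's Pipeline enforces exactly that. Logistic regression stands in for the LMT here, since the Weka model does not plug into scikit-learn's cross-validation:

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# SMOTE runs inside each fold's training split only, so no synthetic
# points leak into the validation folds (the "right way").
pipe = Pipeline([("smote", SMOTE(random_state=0)),
                 ("clf", LogisticRegression(max_iter=1000))])
print(cross_val_score(pipe, X_train, y_train, cv=5, scoring="roc_auc"))
```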

  1. ISLR 6.8.3
  2. ISLR 6.8.5
  3. ISLR 8.4.5
  4. ISLR 9.7.3
  5. Extra Practice: ISLR 5.4.2, 6.8.4, 8.4.4, 9.7.2

Appendix

Weka for Mac users:

  1. Download JDK 9 from http://www.oracle.com/technetwork/java/javase/downloads/html
  2. Add environment variables in Terminal using: vi ~/.bash_profile

(a) export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk-9.0.4.jdk/Contents/Home

(b) export PATH=$JAVA_HOME/bin:$PATH

  3. Restart Terminal
  4. Get brew (package installer for Mac, if you don't have it) and install python (not necessary)
  5. brew install pkg-config
  6. brew install graphviz
  7. pip install javabridge
  8. pip install python-weka-wrapper

And you should be able to use WEKA in your Jupyter Notebooks.

7 http://fracpete.github.io/python-weka-wrapper/install.html may help.

8 If you did not start doing this homework on time, downsample the common class to 6,000 so that you have 12,000 data points after applying SMOTE. Remember that the purpose of this homework is to apply SMOTE to the whole dataset.
