Welcome to the second project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO'
statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can typically be edited by double-clicking the cell to enter edit mode.
Your goal for this project is to identify students who might need early intervention before they fail to graduate. Which type of supervised learning problem is this, classification or regression? Why?
Answer: To decide which kind of supervised learning problem this is, we must check what kind of variable we are trying to predict. In this case, the goal is to predict whether or not students need early intervention because they may fail to graduate. That is a boolean variable: we divide students into two distinct categories, those who need intervention and those who don't. Therefore, this is a classification problem.
The answer would be different if we were to predict a real-valued score indicating how strongly a student needs intervention. In that case, we would have a regression problem. However, I assume a boolean target variable and say classification.
Side note for the interested project reviewer: As we see below, the target variable we predict is whether or not students will fail to graduate, not whether they need intervention. Thus, implicitly, we assume a student failing to graduate is equivalent to a student needing intervention. That is, there exist no students who will fail but need no intervention, say because an intervention would be pointless since its chance of success is marginal for this subgroup of students. In my humble opinion: a very positive idea of man embodied in this project. I like that :)
Run the code cell below to load necessary Python libraries and load the student data. Note that the last column from this dataset, `passed`, will be our target label (whether the student graduated or didn't graduate). All other columns are features about each student.
# Import libraries
import numpy as np
import pandas as pd
from time import time
from sklearn.metrics import f1_score
# Read student data
student_data = pd.read_csv("student-data.csv")
print("Student data read successfully!")
Let's begin by investigating the dataset to determine how many students we have information on, and learn about the graduation rate among these students. In the code cell below, you will need to compute the following:

- The total number of students, `n_students`.
- The total number of features for each student, `n_features`.
- The number of those students who passed, `n_passed`.
- The number of those students who failed, `n_failed`.
- The graduation rate of the class, `grad_rate`, in percent (%).

# TODO: Calculate number of students
n_students = len(student_data)
# TODO: Calculate number of features
n_features = len(student_data.columns) - 1 # 30 feature columns, one target column
# TODO: Calculate passing students
n_passed = len(student_data[student_data['passed'] == 'yes'])
# TODO: Calculate failing students
n_failed = n_students - n_passed
# TODO: Calculate graduation rate
grad_rate = float(n_passed) / float(n_students) * 100
# Print the results
print("Total number of students: {}".format(n_students))
print("Number of features: {}".format(n_features))
print("Number of students who passed: {}".format(n_passed))
print("Number of students who failed: {}".format(n_failed))
print("Graduation rate of the class: {:.2f}%".format(grad_rate))
In this section, we will prepare the data for modeling, training and testing.
It is often the case that the data you obtain contains non-numeric features. This can be a problem, as most machine learning algorithms expect numeric data to perform computations with.
Run the code cell below to separate the student data into feature and target columns to see if any features are non-numeric.
# Extract feature columns
feature_cols = list(student_data.columns[:-1])
# Extract target column 'passed'
target_col = student_data.columns[-1]
# Show the list of columns
print("Feature columns:\n{}".format(feature_cols))
print("\nTarget column: {}".format(target_col))
# Separate the data into feature data and target data (X_all and y_all, respectively)
X_all = student_data[feature_cols]
y_all = student_data[target_col]
# Show the feature information by printing the first five rows
print("\nFeature values:")
print(X_all.head())
As you can see, there are several non-numeric columns that need to be converted! Many of them are simply `yes`/`no`, e.g. `internet`. These can be reasonably converted into `1`/`0` (binary) values.
Other columns, like `Mjob` and `Fjob`, have more than two values, and are known as categorical variables. The recommended way to handle such a column is to create as many columns as possible values (e.g. `Fjob_teacher`, `Fjob_other`, `Fjob_services`, etc.), and assign a `1` to one of them and `0` to all others.
These generated columns are sometimes called dummy variables, and we will use the `pandas.get_dummies()` function to perform this transformation. Run the code cell below to perform the preprocessing routine discussed in this section.
def preprocess_features(X):
    ''' Preprocesses the student data and converts non-numeric binary variables into
        binary (0/1) variables. Converts categorical variables into dummy variables. '''
    # Initialize new output DataFrame
    output = pd.DataFrame(index = X.index)
    # Investigate each feature column for the data
    for col, col_data in X.iteritems():
        # If data type is non-numeric, replace all yes/no values with 1/0
        if col_data.dtype == object:
            col_data = col_data.replace(['yes', 'no'], [1, 0])
        # If data type is categorical, convert to dummy variables
        if col_data.dtype == object:
            # Example: 'school' => 'school_GP' and 'school_MS'
            col_data = pd.get_dummies(col_data, prefix = col)
        # Collect the revised columns
        output = output.join(col_data)
    return output
X_all = preprocess_features(X_all)
print("Processed feature columns ({} total features):\n{}".format(len(X_all.columns), list(X_all.columns)))
So far, we have converted all categorical features into numeric values. For the next step, we split the data (both features and corresponding labels) into training and test sets. In the code cell below, you will need to implement the following:

- Randomly shuffle and split the data (`X_all`, `y_all`) into training and testing subsets.
- Set a `random_state` for the function(s) you use, if provided.
- Store the results in `X_train`, `X_test`, `y_train`, and `y_test`.

# TODO: Import any additional functionality you may need here
from sklearn.cross_validation import train_test_split
# TODO: Set the number of training points
num_train = 300
# Set the number of testing points
num_test = X_all.shape[0] - num_train
# TODO: Shuffle and split the dataset into the number of training and testing points above
X_train, X_test, y_train, y_test = train_test_split(X_all, y_all, train_size=num_train, random_state=42)
# Show the results of the split
print("Training set has {} samples.".format(X_train.shape[0]))
print("Testing set has {} samples.".format(X_test.shape[0]))
In this section, you will choose 3 supervised learning models that are appropriate for this problem and available in `scikit-learn`. You will first discuss the reasoning behind choosing these three models by considering what you know about the data and each model's strengths and weaknesses. You will then fit the model to varying sizes of training data (100 data points, 200 data points, and 300 data points) and measure the F1 score. You will need to produce three tables (one for each model) that show the training set size, training time, prediction time, F1 score on the training set, and F1 score on the testing set.
The following supervised learning models are currently available in `scikit-learn` that you may choose from:

- Gaussian Naive Bayes (GaussianNB)
- Decision Trees
- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K-Nearest Neighbors (KNeighbors)
- Stochastic Gradient Descent (SGDC)
- Support Vector Machines (SVM)
- Logistic Regression

List three supervised learning models that are appropriate for this problem. For each model chosen, explain your reasoning.
Answer:
- Logistic Regression (LR): the simple and fast linear baseline model.
- Support Vector Machine (SVM): the more complex non-linear contender. I assume we are talking about non-linear SVMs here (RBF kernel and the like). Purely linear SVMs often perform about as well as logistic regression, both in my personal experience and according to Andrej Karpathy's lectures on computer vision, so I do not expect any significant difference from the logistic regression classifier for a linear kernel (see the quick sketch below).
- GradientBoostingClassifier (GBC): the computationally intensive ensemble method.
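To support the claim about linear SVMs, here is a quick sketch one could run on the split from above to compare a linear-kernel SVM against logistic regression (no particular scores are claimed here; it merely sets up the comparison):
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
# Compare a linear-kernel SVM with logistic regression on the same split
for model in [LogisticRegression(random_state=42), SVC(kernel='linear', random_state=42)]:
    model.fit(X_train, y_train)
    score = f1_score(y_test, model.predict(X_test), pos_label='yes')
    print("{}: test F1 = {:.4f}".format(model.__class__.__name__, score))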
Run the code cell below to initialize three helper functions which you can use for training and testing the three supervised learning models you've chosen above. The functions are as follows:
- `train_classifier` - takes as input a classifier and training data and fits the classifier to the data.
- `predict_labels` - takes as input a fit classifier, features, and a target labeling and makes predictions using the F1 score.
- `train_predict` - takes as input a classifier, and the training and testing data, and performs `train_classifier` and `predict_labels`.

def train_classifier(clf, X_train, y_train):
    ''' Fits a classifier to the training data. '''
    # Start the clock, train the classifier, then stop the clock
    start = time()
    clf.fit(X_train, y_train)
    end = time()
    # Print the results
    print("Trained model in {:.4f} seconds".format(end - start))
def predict_labels(clf, features, target):
    ''' Makes predictions using a fit classifier based on F1 score. '''
    # Start the clock, make predictions, then stop the clock
    start = time()
    y_pred = clf.predict(features)
    end = time()
    # Print and return results
    print("Made predictions in {:.4f} seconds.".format(end - start))
    return f1_score(target.values, y_pred, pos_label='yes')
def train_predict(clf, X_train, y_train, X_test, y_test):
    ''' Train and predict using a classifier based on F1 score. '''
    # Indicate the classifier and the training set size
    print("Training a {} using a training set size of {}. . .".format(clf.__class__.__name__, len(X_train)))
    # Train the classifier
    train_classifier(clf, X_train, y_train)
    # Print the results of prediction for both training and testing
    print("F1 score for training set: {:.4f}.".format(predict_labels(clf, X_train, y_train)))
    print("F1 score for test set: {:.4f}.".format(predict_labels(clf, X_test, y_test)))
With the predefined functions above, you will now import the three supervised learning models of your choice and run the `train_predict` function for each one. Remember that you will need to train and predict on each classifier for three different training set sizes: 100, 200, and 300. Hence, you should expect to have 9 different outputs below: 3 for each model using the varying training set sizes. In the following code cell, you will need to implement the following:

- Import and initialize the three models, storing them in `clf_A`, `clf_B`, and `clf_C`. Set a `random_state` for each model you use, if provided.
- Create the different training set sizes; the new training points should be drawn from `X_train` and `y_train`.
- Fit each model with each training set size and make predictions on the test set.

# TODO: Import the three supervised learning models from sklearn
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
my_random_state = 42
# TODO: Initialize the three models
clf_A = LogisticRegression(random_state=my_random_state)
clf_B = SVC(random_state=my_random_state)
clf_C = GradientBoostingClassifier(random_state=my_random_state)
classifiers = [clf_A, clf_B, clf_C]
# TODO: Set up the training set sizes
X_train_100 = X_train[:100]
y_train_100 = y_train[:100]
X_train_200 = X_train[:200]
y_train_200 = y_train[:200]
X_train_300 = X_train[:300]
y_train_300 = y_train[:300]
train_sets = [(X_train_100, y_train_100), (X_train_200, y_train_200), (X_train_300, y_train_300)]
# TODO: Execute the 'train_predict' function for each classifier and each training set size
# train_predict(clf, X_train, y_train, X_test, y_test)
for clf in classifiers:
    for X, y in train_sets:
        train_predict(clf, X, y, X_test, y_test)
Classifier 1 - Logistic Regression

| Training Set Size | Training Time (s) | Prediction Time (s, test) | F1 Score (train) | F1 Score (test) |
|---|---|---|---|---|
| 100 | 0.0025 | 0.0004 | 0.8593 | 0.7647 |
| 200 | 0.0028 | 0.0003 | 0.8562 | 0.7914 |
| 300 | 0.0044 | 0.0005 | 0.8468 | 0.8060 |

Classifier 2 - Support Vector Machine

| Training Set Size | Training Time (s) | Prediction Time (s, test) | F1 Score (train) | F1 Score (test) |
|---|---|---|---|---|
| 100 | 0.0023 | 0.0014 | 0.8777 | 0.7746 |
| 200 | 0.0119 | 0.0020 | 0.8679 | 0.7815 |
| 300 | 0.0084 | 0.0016 | 0.8761 | 0.7838 |

Classifier 3 - Gradient Boosting Classifier

| Training Set Size | Training Time (s) | Prediction Time (s, test) | F1 Score (train) | F1 Score (test) |
|---|---|---|---|---|
| 100 | 0.0519 | 0.0005 | 1.0000 | 0.7519 |
| 200 | 0.0725 | 0.0008 | 0.9964 | 0.7591 |
| 300 | 0.1052 | 0.0011 | 0.9739 | 0.7794 |
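The tables above were transcribed from the printed output. As an aside, the same measurements could be collected programmatically; here is a sketch of that idea (it re-times the fits itself rather than reusing the helper functions, and assumes `classifiers` and `train_sets` from the cell above):
# Sketch: gather timings and F1 scores into a DataFrame instead of reading logs
results = []
for clf in classifiers:
    for X, y in train_sets:
        start = time(); clf.fit(X, y); train_time = time() - start
        start = time(); y_pred = clf.predict(X_test); pred_time = time() - start
        results.append({'model': clf.__class__.__name__, 'train_size': len(X),
                        'train_time': train_time, 'pred_time': pred_time,
                        'f1_train': f1_score(y, clf.predict(X), pos_label='yes'),
                        'f1_test': f1_score(y_test, y_pred, pos_label='yes')})
print(pd.DataFrame(results, columns=['model', 'train_size', 'train_time',
                                     'pred_time', 'f1_train', 'f1_test']))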
In this final section, you will choose from the three supervised learning models the best model to use on the student data. You will then perform a grid search optimization for the model over the entire training set (`X_train` and `y_train`) by tuning at least one parameter to improve upon the untuned model's F1 score.
Based on the experiments you performed earlier, in one to two paragraphs, explain to the board of supervisors what single model you chose as the best model. Which model is generally the most appropriate based on the available data, limited resources, cost, and performance?
Answer: Dear board of supervisors,
I recommend you go for the Logistic Regression classifier. My preliminary experiments confirm that it is in fact the fastest of all three models (in terms of both training and prediction time). When using it, you will save valuable milliseconds of CPU time which you can spend on higher-value computations.
Moreover, Logistic Regression also seems to be the most effective of all tested models. Its F1 score on the test set is highest, suggesting that it will perform best on future students. In addition, the spread between the F1 scores on the training and test sets is smallest, which indicates that LR does not overfit to the small training set as much as the other models do.
##################### Helper function for next task #########################
# --- I now define a function that plots the Logistic Regression decision boundary for a 2D problem ---
# --- The plot will be a visual aid for the next task ---
###############################################################################
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.feature_selection import SelectFromModel
import matplotlib.pyplot as plt
%matplotlib inline
def plot_2d_decision_surface():
    # Train with lasso (L1) regularization for feature selection
    parameters = [
        {'C': np.logspace(-2, 4, num=10)}
    ]
    grid_lasso = GridSearchCV(LogisticRegression(penalty='l1', random_state=42), parameters,
                              scoring=make_scorer(f1_score, pos_label='yes'), cv=5, verbose=0)
    grid_lasso.fit(X_train, y_train)
    sfm = SelectFromModel(grid_lasso.best_estimator_, threshold=0.10, prefit=True)
    # Raise the threshold until only the two most important features remain
    X_transform = sfm.transform(X_train)
    while X_transform.shape[1] > 2:
        sfm.threshold += 0.1
        X_transform = sfm.transform(X_train)
    # Train an (effectively unregularized) L2 model on the two features
    lr_predictor = LogisticRegression(C=1e5)
    lr_predictor.fit(X_transform, y_train)
    # Prepare meshgrid
    h = .02
    x_min, x_max = X_transform[:, 0].min() - .5, X_transform[:, 0].max() + .5
    y_min, y_max = X_transform[:, 1].min() - .5, X_transform[:, 1].max() + .5
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    Z = lr_predictor.predict(np.c_[xx.ravel(), yy.ravel()])
    Z = (Z == 'yes').astype(float)  # map 'yes'/'no' predictions to 1/0 for plotting
    Z = Z.reshape(xx.shape)
    # Scatterplot of data + decision boundary
    plt.figure(figsize=(7, 7))
    plt.title("Prediction with two most important features")
    plt.contourf(xx, yy, Z, alpha=0.3, cmap=plt.cm.Paired)
    # Add small noise for visualization since the data is actually categorical
    feature1 = X_transform[:, 0] + np.random.randn(len(X_transform[:, 0])) / 10.0
    feature2 = X_transform[:, 1] + np.random.randn(len(X_transform[:, 1])) / 10.0
    colors = (y_train == 'yes').astype(float)  # 1.0 for passed, 0.0 for failed
    plt.scatter(feature1, feature2, c=colors.values, alpha=0.6, cmap=plt.cm.Paired)
    plt.xlabel(X_train.columns[sfm.get_support()][0])
    plt.ylabel(X_train.columns[sfm.get_support()][1])
    plt.xlim([np.min(feature1) - 0.5, np.max(feature1) + 0.5])
    plt.ylim([np.min(feature2) - 0.5, np.max(feature2) + 0.5])
    plt.show()
In one to two paragraphs, explain to the board of directors in layman's terms how the final model chosen is supposed to work. Be sure that you are describing the major qualities of the model, such as how the model is trained and how the model makes a prediction. Avoid using advanced mathematical or technical jargon, such as describing equations or discussing the algorithm implementation.
Answer:
The model I have created for you is able to predict how likely a student is to graduate. It does so by analysing the 395 students that already went through the program. By looking at the attributes of these students, it identifies cues that have been indicative of failing to graduate in the past. We can look for these cues in future students to get our intervention system. To get a feeling for how the model works, consider the picture below. Imagine you record two attributes for your students:

- `failures`: how many classes the student has failed in the past, and
- `goout`: how often the student goes out with friends.

You would probably expect both numbers to drive the chance of failing to graduate. You could then verify this expectation by drawing a picture like the one below. For each student, you would draw a blue dot if he failed to graduate and a red dot if he did not. You would place dots further to the right the higher the `failures` value is, and further to the top the higher the `goout` value is. If your expectation is correct, you would see more blue dots (= students who failed to graduate) in the upper right corner of your picture. Looking at the picture, you indeed see more blue dots than red ones there. You could now draw a line through the picture to separate two areas, one in which there are mostly blue dots, and another in which there are more red dots than blue. In the picture, you can see an example of such a line. You now have two areas, one in which students are likely to fail to graduate and the other in which they are likely to pass. Drawing such a line is what the model I've created for you does when it learns from data.
If you now see a new student, you can draw a new dot into the picture according to his `failures` and `goout` values and see where the dot lands. You would then check in which of the two areas the student is. Depending on that, you would estimate whether he is likely to fail to graduate or not. The closer the dot is to the line, the less confident you will be about the prediction, i.e., you will assign higher probabilities to points far away from the line. To see why, check the dots in the picture. Close to the line, you have many blue and red dots on either side, so you cannot be sure. Deep inside the blue-shaded area, though, most dots are blue. Similarly, deep inside the yellow-shaded area, most dots are red (exceptions exist, of course). Checking on which side of the line a point lies, and how far away from the line it is, is what the model I've created for you does when it predicts the chance that a student fails to graduate.
The reason you need the model and not just a picture is that the model does the things described above mathematically rather than in pictures, which is why it can look at more than just two attributes. If we were to look at, say, five attributes, we could not draw a picture anymore. The model, however, can work with five attributes just fine.
plot_2d_decision_surface()
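The "how far from the line" intuition has a direct counterpart in code: `predict_proba` turns that distance into a probability per student. A small sketch (it assumes the fitted logistic regression `clf_A` from the model comparison above):
# Sketch: probability estimates for the first five test students
proba = clf_A.predict_proba(X_test[:5])
print(clf_A.classes_)  # column order of the probabilities, e.g. ['no' 'yes']
print(proba)           # rows near 0.5/0.5 correspond to dots close to the line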
Fine tune the chosen model. Use grid search (`GridSearchCV`) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this. In the code cell below, you will need to implement the following:

- Import `sklearn.grid_search.GridSearchCV` and `sklearn.metrics.make_scorer`.
- Create a dictionary of parameters you wish to tune for the chosen model, e.g. `parameters = {'parameter' : [list of values]}`.
- Initialize the classifier you've chosen and store it in `clf`.
- Create the F1 scoring function using `make_scorer` and store it in `f1_scorer`. Set the `pos_label` parameter to the correct value!
- Perform grid search on the classifier `clf` using `f1_scorer` as the scoring method, and store it in `grid_obj`.
- Fit the grid search object to the training data (`X_train`, `y_train`), and store it in `grid_obj`.

# TODO: Import 'GridSearchCV' and 'make_scorer'
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import fbeta_score, make_scorer
import numpy as np
# TODO: Create the parameters list you wish to tune
parameters = [
{'C': np.logspace(-4, 10, num=100),
'class_weight': ['balanced', None]}
]
# TODO: Initialize the classifier
clf = LogisticRegression(random_state=42)
# TODO: Make an f1 scoring function using 'make_scorer'
f1_scorer = make_scorer(f1_score, pos_label='yes')
# TODO: Perform grid search on the classifier using the f1_scorer as the scoring method
grid_obj = GridSearchCV(clf, parameters, scoring=f1_scorer, cv=5, verbose=0)
# TODO: Fit the grid search object to the training data and find the optimal parameters
grid_obj.fit(X_train, y_train)
# Get the estimator
clf = grid_obj.best_estimator_
print("Best score on hold-out data during grid search: {:.4f}".format(grid_obj.best_score_))
# Report the final F1 score for training and testing after parameter tuning
print("Tuned model has a training F1 score of {:.4f}.".format(predict_labels(clf, X_train, y_train)))
print("Tuned model has a testing F1 score of {:.4f}.".format(predict_labels(clf, X_test, y_test)))
What is the final model's F1 score for training and testing? How does that score compare to the untuned model?
Answer: The final F1 score on the training data is 0.8358. Sadly, the final F1 score on the test data is only 0.7778, which is much less than I hoped it would be. In particular, the F1 score on the test data during my preliminary experiments above was 0.8060. It seems as if the preliminary results gave me a good score only by chance.
In addition to the scores defined by Udacity, I've also printed out the best score during grid search in the cell above. It was 0.8210, so the cross-validated search itself seems to work well. There are multiple possible explanations for the drop in performance on the test set.
Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to
File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.