{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# # Classifiers introduction\n", "\n", "In the following program we introduce the basic steps of classification of a dataset in a matrix" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Import the package for learning and modeling trees" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "scrolled": true }, "outputs": [], "source": [ "from sklearn import tree" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Define the matrix containing the data (one example per row)\n", "and the vector containing the corresponding target value" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "X = [[0, 0, 0], [1, 1, 1], [0, 1, 0], [0, 0, 1], [1, 1, 0], [1, 0, 1]]\n", "Y = [1, 0, 0, 0, 1, 1]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Declare the classification model you want to use and then fit the model to the data" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "clf = tree.DecisionTreeClassifier()\n", "clf = clf.fit(X, Y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Predict the target value (and print it) for the passed data, using the fitted model currently in clf" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[0]\n" ] } ], "source": [ "print(clf.predict([[0, 1, 1]]))" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[1 0]\n" ] } ], "source": [ "print(clf.predict([[1, 0, 1],[0, 0, 1]]))" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "image/svg+xml": [ "\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "Tree\r\n", "\r\n", "\r\n", "0\r\n", "\r\n", "X[0] <= 0.5\r\n", "gini = 0.5\r\n", "samples = 6\r\n", "value = [3, 3]\r\n", "\r\n", "\r\n", "1\r\n", "\r\n", "X[1] <= 0.5\r\n", "gini = 0.444\r\n", "samples = 3\r\n", "value = [2, 1]\r\n", "\r\n", "\r\n", "0->1\r\n", "\r\n", "\r\n", "True\r\n", "\r\n", "\r\n", "6\r\n", "\r\n", "X[2] <= 0.5\r\n", "gini = 0.444\r\n", "samples = 3\r\n", "value = [1, 2]\r\n", "\r\n", "\r\n", "0->6\r\n", "\r\n", "\r\n", "False\r\n", "\r\n", "\r\n", "2\r\n", "\r\n", "X[2] <= 0.5\r\n", "gini = 0.5\r\n", "samples = 2\r\n", "value = [1, 1]\r\n", "\r\n", "\r\n", "1->2\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "5\r\n", "\r\n", "gini = 0.0\r\n", "samples = 1\r\n", "value = [1, 0]\r\n", "\r\n", "\r\n", "1->5\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "3\r\n", "\r\n", "gini = 0.0\r\n", "samples = 1\r\n", "value = [0, 1]\r\n", "\r\n", "\r\n", "2->3\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "4\r\n", "\r\n", "gini = 0.0\r\n", "samples = 1\r\n", "value = [1, 0]\r\n", "\r\n", "\r\n", "2->4\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "7\r\n", "\r\n", "gini = 0.0\r\n", "samples = 1\r\n", "value = [0, 1]\r\n", "\r\n", "\r\n", "6->7\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "8\r\n", "\r\n", "X[1] <= 0.5\r\n", "gini = 0.5\r\n", "samples = 2\r\n", "value = [1, 1]\r\n", "\r\n", "\r\n", "6->8\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "9\r\n", "\r\n", "gini = 0.0\r\n", "samples = 1\r\n", "value = [0, 1]\r\n", "\r\n", "\r\n", "8->9\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "10\r\n", "\r\n", "gini = 0.0\r\n", "samples = 1\r\n", "value = [1, 0]\r\n", "\r\n", "\r\n", "8->10\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "\r\n" ], "text/plain": [ "" ] }, "execution_count": 8, "metadata": {}, "output_type": 
"execute_result" } ], "source": [ "import os\n", "os.environ[\"PATH\"] += os.pathsep + 'C:/Users/galat/.conda/envs/aaut/Library/bin/graphviz'\n", "import graphviz\n", "dot_data = tree.export_graphviz(clf, out_file=None) \n", "graph = graphviz.Source(dot_data) \n", "graph" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the following we start using a dataset (from UCI Machine Learning repository)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "from sklearn.datasets import load_iris\n", "iris = load_iris()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Declare the type of prediction model and the working criteria for the model induction algorithm" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "clf = tree.DecisionTreeClassifier(criterion=\"entropy\",random_state=300,min_samples_leaf=5,class_weight={0:1,1:1,2:1})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Split the dataset in training and test set" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "# Generate a random permutation of the indices of examples that will be later used \n", "# for the training and the test set\n", "import numpy as np\n", "np.random.seed(1231)\n", "indices = np.random.permutation(len(iris.data))\n", "\n", "# We now decide to keep the last 10 indices for test set, the remaining for the training set\n", "indices_training=indices[:-10]\n", "indices_test=indices[-10:]\n", "\n", "iris_X_train = iris.data[indices_training] # keep for training all the matrix elements with the exception of the last 10 \n", "iris_y_train = iris.target[indices_training]\n", "iris_X_test = iris.data[indices_test] # keep the last 10 elements for test set\n", "iris_y_test = iris.target[indices_test]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Fit the learning model on training set" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "# fit the model to the training data\n", "clf = clf.fit(iris_X_train, iris_y_train)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Obtain predictions" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Predictions:\n", "[0 0 0 1 0 0 1 2 0 0]\n", "True classes:\n", "[0 0 0 2 0 0 1 1 0 0]\n", "['setosa' 'versicolor' 'virginica']\n" ] } ], "source": [ "# apply fitted model \"clf\" to the test set \n", "predicted_y_test = clf.predict(iris_X_test)\n", "\n", "# print the predictions (class numbers associated to classes names in target names)\n", "print(\"Predictions:\")\n", "print(predicted_y_test)\n", "print(\"True classes:\")\n", "print(iris_y_test) \n", "print(iris.target_names)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Print the index of the test instances and the corresponding predictions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Look at the specific examples" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Instance # 33: \n", "sepal length (cm)=5.5, sepal width (cm)=4.2, petal length (cm)=1.4, petal width (cm)=0.2\n", "Predicted: setosa\t True: setosa\n", "\n", "Instance # 2: \n", "sepal length (cm)=4.7, sepal width (cm)=3.2, petal length (cm)=1.3, petal width (cm)=0.2\n", "Predicted: setosa\t True: setosa\n", "\n", "Instance # 11: \n", 
"sepal length (cm)=4.8, sepal width (cm)=3.4, petal length (cm)=1.6, petal width (cm)=0.2\n", "Predicted: setosa\t True: setosa\n", "\n", "Instance # 126: \n", "sepal length (cm)=6.2, sepal width (cm)=2.8, petal length (cm)=4.8, petal width (cm)=1.8\n", "Predicted: versicolor\t True: virginica\n", "\n", "Instance # 49: \n", "sepal length (cm)=5.0, sepal width (cm)=3.3, petal length (cm)=1.4, petal width (cm)=0.2\n", "Predicted: setosa\t True: setosa\n", "\n", "Instance # 10: \n", "sepal length (cm)=5.4, sepal width (cm)=3.7, petal length (cm)=1.5, petal width (cm)=0.2\n", "Predicted: setosa\t True: setosa\n", "\n", "Instance # 85: \n", "sepal length (cm)=6.0, sepal width (cm)=3.4, petal length (cm)=4.5, petal width (cm)=1.6\n", "Predicted: versicolor\t True: versicolor\n", "\n", "Instance # 52: \n", "sepal length (cm)=6.9, sepal width (cm)=3.1, petal length (cm)=4.9, petal width (cm)=1.5\n", "Predicted: virginica\t True: versicolor\n", "\n", "Instance # 5: \n", "sepal length (cm)=5.4, sepal width (cm)=3.9, petal length (cm)=1.7, petal width (cm)=0.4\n", "Predicted: setosa\t True: setosa\n", "\n", "Instance # 21: \n", "sepal length (cm)=5.1, sepal width (cm)=3.7, petal length (cm)=1.5, petal width (cm)=0.4\n", "Predicted: setosa\t True: setosa\n", "\n" ] } ], "source": [ "for i in range(len(iris_y_test)): \n", " print(\"Instance # \"+str(indices_test[i])+\": \")\n", " s=\"\"\n", " for j in range(len(iris.feature_names)):\n", " s=s+iris.feature_names[j]+\"=\"+str(iris_X_test[i][j])\n", " if (j\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "Tree\r\n", "\r\n", "\r\n", "0\r\n", "\r\n", "petal length (cm) ≤ 2.45\r\n", "entropy = 1.585\r\n", "samples = 150\r\n", "value = [50, 50, 50]\r\n", "class = setosa\r\n", "\r\n", "\r\n", "1\r\n", "\r\n", "entropy = 0.0\r\n", "samples = 50\r\n", "value = [50, 0, 0]\r\n", "class = setosa\r\n", "\r\n", "\r\n", "0->1\r\n", "\r\n", "\r\n", "True\r\n", "\r\n", "\r\n", "2\r\n", "\r\n", "petal width (cm) ≤ 1.75\r\n", "entropy = 1.0\r\n", "samples = 100\r\n", "value = [0, 50, 50]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "0->2\r\n", "\r\n", "\r\n", "False\r\n", "\r\n", "\r\n", "3\r\n", "\r\n", "petal length (cm) ≤ 4.95\r\n", "entropy = 0.445\r\n", "samples = 54\r\n", "value = [0, 49, 5]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "2->3\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "8\r\n", "\r\n", "petal length (cm) ≤ 4.95\r\n", "entropy = 0.151\r\n", "samples = 46\r\n", "value = [0, 1, 45]\r\n", "class = virginica\r\n", "\r\n", "\r\n", "2->8\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "4\r\n", "\r\n", "sepal length (cm) ≤ 5.15\r\n", "entropy = 0.146\r\n", "samples = 48\r\n", "value = [0, 47, 1]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "3->4\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "7\r\n", "\r\n", "entropy = 0.918\r\n", "samples = 6\r\n", "value = [0, 2, 4]\r\n", "class = virginica\r\n", "\r\n", "\r\n", "3->7\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "5\r\n", "\r\n", "entropy = 0.722\r\n", "samples = 5\r\n", "value = [0, 4, 1]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "4->5\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "6\r\n", "\r\n", "entropy = 0.0\r\n", "samples = 43\r\n", "value = [0, 43, 0]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "4->6\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "9\r\n", "\r\n", "entropy = 0.65\r\n", "samples = 6\r\n", "value = [0, 1, 5]\r\n", "class = virginica\r\n", "\r\n", "\r\n", "8->9\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "10\r\n", "\r\n", "entropy = 0.0\r\n", "samples = 40\r\n", "value = [0, 0, 40]\r\n", "class = virginica\r\n", "\r\n", 
"\r\n", "8->10\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "\r\n" ], "text/plain": [ "" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dot_data = tree.export_graphviz(clf, out_file=None, \n", " feature_names=iris.feature_names, \n", " class_names=iris.target_names, \n", " filled=True, rounded=True, \n", " special_characters=True) \n", "graph = graphviz.Source(dot_data) \n", "graph" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 1. Artificial inflation" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Generate a random permutation of the indices of examples that will be later used \n", "# for the training and the test set\n", "import numpy as np\n", "np.random.seed(1231)\n", "indices = np.random.permutation(len(iris.data))\n", "\n", "# We now decide to keep the last 10 indices for test set, the remaining for the training set\n", "indices_training=indices[:-10]\n", "indices_test=indices[-10:]\n", "\n", "iris_X_train = iris.data[indices_training] # keep for training all the matrix elements with the exception of the last 10 \n", "iris_y_train = iris.target[indices_training]\n", "iris_X_test = iris.data[indices_test] # keep the last 10 elements for test set\n", "iris_y_test = iris.target[indices_test]" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "samples_x = []\n", "samples_y = []\n", "for i in range(0, len(iris_y_train)):\n", " if iris_y_train[i] == 1:\n", " for _ in range(9):\n", " samples_x.append(iris_X_train[i])\n", " samples_y.append(1)\n", " elif iris_y_train[i] == 2:\n", " for _ in range(9):\n", " samples_x.append(iris_X_train[i])\n", " samples_y.append(2)\n", "\n", "#Samples inflation\n", "iris_X_train = np.append(iris_X_train, samples_x, axis = 0)\n", "iris_y_train = np.append(iris_y_train, samples_y, axis = 0)" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Accuracy: 1.0\n", "F1: 1.0\n" ] } ], "source": [ "clf = tree.DecisionTreeClassifier(criterion=\"entropy\",random_state=300,min_samples_leaf=10,class_weight={0:1,1:1,2:1})\n", "clf = clf.fit(iris_X_train, iris_y_train)\n", "predicted_y_test = clf.predict(iris_X_test)\n", "acc_score = accuracy_score(iris_y_test, predicted_y_test)\n", "f1 = f1_score(iris_y_test, predicted_y_test, average='macro')\n", "print(\"Accuracy: \", acc_score)\n", "print(\"F1: \", f1)" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "data": { "image/svg+xml": [ "\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "Tree\r\n", "\r\n", "\r\n", "0\r\n", "\r\n", "petal length (cm) ≤ 4.85\r\n", "entropy = 1.211\r\n", "samples = 1013\r\n", "value = [43, 480, 490]\r\n", "class = virginica\r\n", "\r\n", "\r\n", "1\r\n", "\r\n", "petal length (cm) ≤ 2.45\r\n", "entropy = 0.648\r\n", "samples = 513\r\n", "value = [43, 450, 20]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "0->1\r\n", "\r\n", "\r\n", "True\r\n", "\r\n", "\r\n", "8\r\n", "\r\n", "petal width (cm) ≤ 1.75\r\n", "entropy = 0.327\r\n", "samples = 500\r\n", "value = [0, 30, 470]\r\n", "class = virginica\r\n", "\r\n", "\r\n", "0->8\r\n", "\r\n", "\r\n", "False\r\n", "\r\n", "\r\n", "2\r\n", "\r\n", "entropy = 0.0\r\n", "samples = 43\r\n", "value = [43, 0, 0]\r\n", "class = setosa\r\n", "\r\n", "\r\n", "1->2\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "3\r\n", "\r\n", "petal width (cm) ≤ 1.65\r\n", "entropy = 0.254\r\n", "samples = 
470\r\n", "value = [0, 450, 20]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "1->3\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "4\r\n", "\r\n", "entropy = 0.0\r\n", "samples = 440\r\n", "value = [0, 440, 0]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "3->4\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "5\r\n", "\r\n", "sepal width (cm) ≤ 3.1\r\n", "entropy = 0.918\r\n", "samples = 30\r\n", "value = [0, 10, 20]\r\n", "class = virginica\r\n", "\r\n", "\r\n", "3->5\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "6\r\n", "\r\n", "entropy = 0.0\r\n", "samples = 20\r\n", "value = [0, 0, 20]\r\n", "class = virginica\r\n", "\r\n", "\r\n", "5->6\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "7\r\n", "\r\n", "entropy = 0.0\r\n", "samples = 10\r\n", "value = [0, 10, 0]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "5->7\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "9\r\n", "\r\n", "petal length (cm) ≤ 5.35\r\n", "entropy = 0.985\r\n", "samples = 70\r\n", "value = [0, 30, 40]\r\n", "class = virginica\r\n", "\r\n", "\r\n", "8->9\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "16\r\n", "\r\n", "entropy = 0.0\r\n", "samples = 430\r\n", "value = [0, 0, 430]\r\n", "class = virginica\r\n", "\r\n", "\r\n", "8->16\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "10\r\n", "\r\n", "petal width (cm) ≤ 1.55\r\n", "entropy = 0.971\r\n", "samples = 50\r\n", "value = [0, 30, 20]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "9->10\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "15\r\n", "\r\n", "entropy = 0.0\r\n", "samples = 20\r\n", "value = [0, 0, 20]\r\n", "class = virginica\r\n", "\r\n", "\r\n", "9->15\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "11\r\n", "\r\n", "petal length (cm) ≤ 4.95\r\n", "entropy = 0.918\r\n", "samples = 30\r\n", "value = [0, 10, 20]\r\n", "class = virginica\r\n", "\r\n", "\r\n", "10->11\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "14\r\n", "\r\n", "entropy = 0.0\r\n", "samples = 20\r\n", "value = [0, 20, 0]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "10->14\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "12\r\n", "\r\n", "entropy = 0.0\r\n", "samples = 10\r\n", "value = [0, 10, 0]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "11->12\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "13\r\n", "\r\n", "entropy = 0.0\r\n", "samples = 20\r\n", "value = [0, 0, 20]\r\n", "class = virginica\r\n", "\r\n", "\r\n", "11->13\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "\r\n" ], "text/plain": [ "" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dot_data = tree.export_graphviz(clf, out_file=None, \n", " feature_names=iris.feature_names, \n", " class_names=iris.target_names, \n", " filled=True, rounded=True, \n", " special_characters=True) \n", "graph = graphviz.Source(dot_data) \n", "graph" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 2. 
Class weights" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "# Generate a random permutation of the indices of examples that will be later used \n", "# for the training and the test set\n", "import numpy as np\n", "np.random.seed(1231)\n", "indices = np.random.permutation(len(iris.data))\n", "\n", "# We now decide to keep the last 10 indices for test set, the remaining for the training set\n", "indices_training=indices[:-10]\n", "indices_test=indices[-10:]\n", "\n", "iris_X_train = iris.data[indices_training] # keep for training all the matrix elements with the exception of the last 10 \n", "iris_y_train = iris.target[indices_training]\n", "iris_X_test = iris.data[indices_test] # keep the last 10 elements for test set\n", "iris_y_test = iris.target[indices_test]" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Accuracy: 0.8\n", "F1: 0.5\n" ] } ], "source": [ "clf = tree.DecisionTreeClassifier(criterion=\"entropy\",random_state=300,min_samples_leaf=5,class_weight={0:1,1:10,2:10})\n", "clf = clf.fit(iris_X_train, iris_y_train)\n", "predicted_y_test = clf.predict(iris_X_test)\n", "acc_score = accuracy_score(iris_y_test, predicted_y_test)\n", "f1 = f1_score(iris_y_test, predicted_y_test, average='macro')\n", "print(\"Accuracy: \", acc_score)\n", "print(\"F1: \", f1)" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "data": { "image/svg+xml": [ "\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "Tree\r\n", "\r\n", "\r\n", "0\r\n", "\r\n", "petal length (cm) ≤ 4.85\r\n", "entropy = 1.211\r\n", "samples = 140\r\n", "value = [43, 480, 490]\r\n", "class = virginica\r\n", "\r\n", "\r\n", "1\r\n", "\r\n", "petal length (cm) ≤ 2.45\r\n", "entropy = 0.648\r\n", "samples = 90\r\n", "value = [43, 450, 20]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "0->1\r\n", "\r\n", "\r\n", "True\r\n", "\r\n", "\r\n", "8\r\n", "\r\n", "petal width (cm) ≤ 1.75\r\n", "entropy = 0.327\r\n", "samples = 50\r\n", "value = [0, 30, 470]\r\n", "class = virginica\r\n", "\r\n", "\r\n", "0->8\r\n", "\r\n", "\r\n", "False\r\n", "\r\n", "\r\n", "2\r\n", "\r\n", "entropy = 0.0\r\n", "samples = 43\r\n", "value = [43, 0, 0]\r\n", "class = setosa\r\n", "\r\n", "\r\n", "1->2\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "3\r\n", "\r\n", "petal width (cm) ≤ 1.45\r\n", "entropy = 0.254\r\n", "samples = 47\r\n", "value = [0, 450, 20]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "1->3\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "4\r\n", "\r\n", "entropy = 0.0\r\n", "samples = 35\r\n", "value = [0, 350, 0]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "3->4\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "5\r\n", "\r\n", "sepal length (cm) ≤ 6.1\r\n", "entropy = 0.65\r\n", "samples = 12\r\n", "value = [0, 100, 20]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "3->5\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "6\r\n", "\r\n", "entropy = 0.863\r\n", "samples = 7\r\n", "value = [0, 50, 20]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "5->6\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "7\r\n", "\r\n", "entropy = 0.0\r\n", "samples = 5\r\n", "value = [0, 50, 0]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "5->7\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "9\r\n", "\r\n", "entropy = 0.985\r\n", "samples = 7\r\n", "value = [0, 30, 40]\r\n", "class = virginica\r\n", "\r\n", "\r\n", "8->9\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "10\r\n", "\r\n", "entropy = 0.0\r\n", "samples = 43\r\n", "value = [0, 0, 430]\r\n", 
"class = virginica\r\n", "\r\n", "\r\n", "8->10\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "\r\n" ], "text/plain": [ "" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dot_data = tree.export_graphviz(clf, out_file=None, \n", " feature_names=iris.feature_names, \n", " class_names=iris.target_names, \n", " filled=True, rounded=True, \n", " special_characters=True) \n", "graph = graphviz.Source(dot_data) \n", "graph" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 3. Avoid overfitting" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [], "source": [ "# Generate a random permutation of the indices of examples that will be later used \n", "# for the training and the test set\n", "import numpy as np\n", "np.random.seed(1231)\n", "indices = np.random.permutation(len(iris.data))\n", "\n", "# We now decide to keep the last 10 indices for test set, the remaining for the training set\n", "indices_training=indices[:-10]\n", "indices_test=indices[-10:]\n", "\n", "iris_X_train = iris.data[indices_training] # keep for training all the matrix elements with the exception of the last 10 \n", "iris_y_train = iris.target[indices_training]\n", "iris_X_test = iris.data[indices_test] # keep the last 10 elements for test set\n", "iris_y_test = iris.target[indices_test]" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Accuracy: 1.0\n", "F1: 1.0\n" ] } ], "source": [ "clf = tree.DecisionTreeClassifier(criterion=\"entropy\",random_state=300,min_samples_leaf=3,class_weight={0:1,1:10,2:10}, min_impurity_decrease = 0.005, max_depth = 4, max_leaf_nodes = 6)\n", "clf = clf.fit(iris_X_train, iris_y_train)\n", "predicted_y_test = clf.predict(iris_X_test)\n", "acc_score = accuracy_score(iris_y_test, predicted_y_test)\n", "f1 = f1_score(iris_y_test, predicted_y_test, average='macro')\n", "print(\"Accuracy: \", acc_score)\n", "print(\"F1: \", f1)" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "data": { "image/svg+xml": [ "\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "Tree\r\n", "\r\n", "\r\n", "0\r\n", "\r\n", "petal length (cm) ≤ 4.85\r\n", "entropy = 1.211\r\n", "samples = 140\r\n", "value = [43, 480, 490]\r\n", "class = virginica\r\n", "\r\n", "\r\n", "1\r\n", "\r\n", "petal length (cm) ≤ 2.45\r\n", "entropy = 0.648\r\n", "samples = 90\r\n", "value = [43, 450, 20]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "0->1\r\n", "\r\n", "\r\n", "True\r\n", "\r\n", "\r\n", "2\r\n", "\r\n", "petal width (cm) ≤ 1.75\r\n", "entropy = 0.327\r\n", "samples = 50\r\n", "value = [0, 30, 470]\r\n", "class = virginica\r\n", "\r\n", "\r\n", "0->2\r\n", "\r\n", "\r\n", "False\r\n", "\r\n", "\r\n", "3\r\n", "\r\n", "entropy = 0.0\r\n", "samples = 43\r\n", "value = [43, 0, 0]\r\n", "class = setosa\r\n", "\r\n", "\r\n", "1->3\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "4\r\n", "\r\n", "petal width (cm) ≤ 1.65\r\n", "entropy = 0.254\r\n", "samples = 47\r\n", "value = [0, 450, 20]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "1->4\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "7\r\n", "\r\n", "entropy = 0.0\r\n", "samples = 44\r\n", "value = [0, 440, 0]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "4->7\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "8\r\n", "\r\n", "entropy = 0.918\r\n", "samples = 3\r\n", "value = [0, 10, 20]\r\n", "class = virginica\r\n", "\r\n", "\r\n", "4->8\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "5\r\n", "\r\n", "petal length (cm) ≤ 
5.05\r\n", "entropy = 0.985\r\n", "samples = 7\r\n", "value = [0, 30, 40]\r\n", "class = virginica\r\n", "\r\n", "\r\n", "2->5\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "6\r\n", "\r\n", "entropy = 0.0\r\n", "samples = 43\r\n", "value = [0, 0, 430]\r\n", "class = virginica\r\n", "\r\n", "\r\n", "2->6\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "9\r\n", "\r\n", "entropy = 0.918\r\n", "samples = 3\r\n", "value = [0, 20, 10]\r\n", "class = versicolor\r\n", "\r\n", "\r\n", "5->9\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "10\r\n", "\r\n", "entropy = 0.811\r\n", "samples = 4\r\n", "value = [0, 10, 30]\r\n", "class = virginica\r\n", "\r\n", "\r\n", "5->10\r\n", "\r\n", "\r\n", "\r\n", "\r\n", "\r\n" ], "text/plain": [ "" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dot_data = tree.export_graphviz(clf, out_file=None, \n", " feature_names=iris.feature_names, \n", " class_names=iris.target_names, \n", " filled=True, rounded=True, \n", " special_characters=True) \n", "graph = graphviz.Source(dot_data) \n", "graph" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 4. Confusion Matrix" ] }, { "cell_type": "code", "execution_count": 31, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "array([[7, 0, 0],\n", " [0, 2, 0],\n", " [0, 0, 1]])" ] }, "execution_count": 31, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# initializes the confusion matrix\n", "confusion = np.zeros([3, 3], dtype = int)\n", "\n", "# print the corresponding instances indexes and class names\n", "for i in range(len(iris_y_test)): \n", " #increments the indexed cell value\n", " confusion[iris_y_test[i], predicted_y_test[i]]+=1\n", "confusion" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 5. 
ROC Curves" ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[(0.0, 43.0), (30.0, 0.0), (30.0, 0.0), (40.0, 0.0), (430.0, 0.0), (440.0, 0.0)], [(0.0, 440.0), (10.0, 20.0), (20.0, 10.0), (30.0, 10.0), (43.0, 0.0), (430.0, 0.0)], [(0.0, 430.0), (10.0, 30.0), (10.0, 20.0), (20.0, 10.0), (43.0, 0.0), (440.0, 0.0)]]\n" ] }, { "data": { "text/plain": [ "[[[0, 0.0, 30.0, 60.0, 100.0, 530.0, 970.0],\n", " [0, 43.0, 43.0, 43.0, 43.0, 43.0, 43.0]],\n", " [[0, 0.0, 10.0, 30.0, 60.0, 103.0, 533.0],\n", " [0, 440.0, 460.0, 470.0, 480.0, 480.0, 480.0]],\n", " [[0, 0.0, 10.0, 20.0, 40.0, 83.0, 523.0],\n", " [0, 430.0, 460.0, 480.0, 490.0, 490.0, 490.0]]]" ] }, "execution_count": 32, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Calculates the ROC curves (x, y)\n", "leafs = []\n", "class_pairs = [[],[],[]]\n", "roc_curves = [[[0], [0]], [[0], [0]], [[0], [0]]]\n", "for i in range(clf.tree_.node_count):\n", " if (clf.tree_.feature[i] == -2):\n", " leafs.append(i)\n", "\n", "# c = class index\n", "for leaf in leafs:\n", " for c in range(3):\n", " #pairs(neg, pos)\n", " class_pairs[c].append((clf.tree_.value[leaf][0].sum() - clf.tree_.value[leaf][0][c], clf.tree_.value[leaf][0][c]))\n", "\n", "#pairs sorting\n", "for c in range(3):\n", " class_pairs[c] = sorted(class_pairs[c], key=lambda t: t[0]/max(1,t[1]))\n", "print(class_pairs)\n", "\n", "for i in range(1, len(leafs) + 1):\n", " for c in range(3):\n", " roc_curves[c][0].append(class_pairs[c][i - 1][0] + roc_curves[c][0][i - 1])\n", " roc_curves[c][1].append(class_pairs[c][i - 1][1] + roc_curves[c][1][i - 1])\n", "\n", "roc_curves" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAXcAAAD4CAYAAAAXUaZHAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAM3UlEQVR4nO3dX4yl9V3H8fdHloK2qUAZyMqCC3FTISZdmgmCeGGgVMSmcIEJpNGNbrI3NVJtUkEvmiZelMSU1cQ03RTsxpQ/lRIhpJGQLcSYGOogSKFb3IVauoLskEKrXmixXy/Os3Tc7jLnnDmzs/Od9yuZzHme8xzO73d+5D3PPDOzJ1WFJKmXn1jrAUiSZs+4S1JDxl2SGjLuktSQcZekhjadyCc7++yza+vWrSfyKSVp3XvyySdfq6q5SR5zQuO+detWFhYWTuRTStK6l+Tbkz7GyzKS1JBxl6SGjLskNWTcJakh4y5JDRl3SWrIuEtSQyf099yntmcP3H33Wo9CkqazfTvs3n1Cn3J9nLnffTc8/fRaj0KS1o31ceYOo698jz++1qOQpHVhfZy5S5ImYtwlqSHjLkkNGXdJasi4S1JDxl2SGjLuktSQcZekhoy7JDU0dtyTnJLkqSQPD9sXJnkiyYEk9yV5x+oNU5I0iUnO3G8B9i/Zvh24o6q2Aa8DO2c5MEnS9MaKe5ItwK8Dnx+2A1wF3D8cshe4YTUGKEma3Lhn7ruBTwA/HLbfA7xRVW8O24eA8471wCS7kiwkWVhcXFzRYCVJ41k27kk+BByuqieX7j7GoXWsx1fVnqqar6r5ubm5KYcpSZrEOP/k75XAh5NcB5wOvJvRmfwZSTYNZ+9bgJdXb5iSpEkse+ZeVbdV1Zaq2grcBHy1qj4CPAbcOBy2A3hw1UYpSZrISn7P/Q+BP0hykNE1+DtnMyRJ0kpN9E5MVfU48Phw+0XgstkPSZK0Uv6FqiQ1ZNwlqSHjLkkNGXdJasi4S1JDxl2SGjLuktSQcZekhoy7JDVk3CWpIeMuSQ0Zd0lqyLhLUkPGXZIaMu6S1JBxl6SGjLskNWTcJakh4y5JDRl3SWrIuEtSQ8Zdkhoy7pLUkHGXpIaMuyQ1ZNwlqSHjLkkNGXdJasi4S1JDxl2SGjLuktSQcZekhoy7JDVk3CWpIeMuSQ0Zd0lqyLhLUkPLxj3J6Um+luSfkzyX5FPD/guTPJHkQJL7krxj9YcrSRrHOGfu/w1cVVXvA7YD1ya5HLgduKOqtgGvAztXb5iSpEksG/ca+c9h89Tho4CrgPuH/XuBG1ZlhJKkiY11zT3JKUmeBg4DjwIvAG9U1ZvDIYeA847z2F1JFpIsLC4uzmLMkqRljBX3qvrfqtoObAEuAy4+1mHHeeyeqpqvqvm5ubnpRypJGttEvy1TVW8AjwOXA2ck2TTctQV4ebZDkyRNa5zflplLcsZw+yeBDwD7gceAG4fDdgAPrtYgJUmT2bT8IWwG9iY5hdEXgy9V1cNJvgHcm+RPgKeAO1dxnJKkCSwb96p6Brj0GPtfZHT9XZJ0kvEvVCWpIeMuSQ0Zd0lqyLhLUkPGXZIaMu6S1JBxl6SGjLskNWTcJakh4y5JDRl3SWrIuEtSQ8Zdkhoy7pLUkHGXpIaMuyQ1ZNwlqSHjLkkNGXdJasi4S1JDxl2SGjLuktSQcZekhoy7JDVk3CWpIeMuSQ0Zd0lqyLhLUkPGXZIaMu6S1JBxl6SGjLskNWTcJakh4y5JDRl3SWrIuEtSQ8ZdkhpaNu5Jzk/yWJL9SZ5Lcsuw/6wkjyY5MHw+c/WHK0kaxzhn7m8CH6+qi4HLgY8muQS4FdhXVduAfcO2JOkksGzcq+qVqvqn4fZ/APuB84Drgb3DYXuBG1ZrkJKkyUx0zT3JVuBS4Ang3Kp6BUZfAIBzjvOYXUkWkiwsLi6ubLSSpLGMHfck7wK+DHysqr4/7uOqak9VzVfV/Nzc3DRjlCRNaKy4JzmVUdi/WFUPDLtfTbJ5uH8zcHh1hihJmtQ4vy0T4E5gf1V9ZsldDwE7hts7gAdnPzxJ0jQ2jXHMlcBvAl9P8vSw74+ATwNfSrITeAn4jdUZoiRpUsvGvar+Hshx7r56tsORJM2Cf6EqSQ0Zd0lqyLhLUkPGXZIaMu6S1JBxl6SGjLskNWTcJakh4y5JDRl3SWrIuEtSQ8Zdkhoy7pLUkHGXpIaMuyQ1ZNwlqSHjLkkNGXdJasi4S1JDxl2SGjLuktSQcZekhoy7JDVk3CWpIeMuSQ0Zd0lqyLhLUkPGXZIaMu6S1JBxl6SGjLskNWTcJakh4y5JDRl3SWrIuEtSQ8Zdkhoy7pLU0LJxT3JXksNJnl2y76wkjyY5MHw+c3WHKUmaxDhn7l8Arj1q363AvqraBuwbtiVJJ4ll415Vfwd896jd1wN7h9t7gRtmPC5J0gpMe8393Kp6BWD4fM7xDkyyK8lCkoXFxcUpn06SNIlV/4FqVe2pqvmqmp+bm1vtp5MkMX3cX02yGWD4fHh2Q5IkrdS0cX8I2DHc3gE8OJvhSJJmYZxfhbwH+AfgvUkOJdkJfBq4JskB4JphW5J0kti03AFVdfNx7rp6xmORJM2If6EqSQ0Zd0lqyLhLUkPGXZIaMu6S1JBxl6SGjLskNWTcJakh4y5JDRl3SWrIuEtSQ8Zdkhoy7pLUkHGXpIaMuyQ1ZNwlqSHjLkkNGXdJasi4S1JDxl2SGjLuktSQcZekhoy7JDVk3CWpIeMuSQ0Zd0lqyLhLUkPGXZIaMu6S1JBxl6SGjLskNWTcJakh4y5JDRl3SWrIuEtSQ8Zdkhoy7pLU0IrinuTaJM8nOZjk1lkNSpK0MlPHPckpwF8AvwZcAtyc5JJZDUySNL2VnLlfBhysqher6n+Ae4HrZzMsSdJKbFrBY88DvrNk+xDwi0cflGQXsAvgggsumO6Ztm+f7nGStEGtJO45xr76sR1Ve4A9APPz8z92/1h2757qYZK0Ua3ksswh4Pwl21uAl1c2HEnSLKwk7v8IbEtyYZJ3ADcBD81mWJKklZj6skxVvZnkd4FHgFOAu6rquZmNTJI0tZVcc6eqvgJ8ZUZjkSTNiH+hKkkNGXdJasi4S1JDxl2SGkrVdH9XNNWTJYvAt6d8+NnAazMcznri3DeejTpvcO7HmvvPVtXcJP+hExr3lUiyUFXzaz2OteDcN97cN+q8wbnPau5elpGkhoy7JDW0nuK+Z60HsIac+8azUecNzn0m1s01d0nS+NbTmbskaUzGXZIaWhdx7/xG3EnOT/JYkv1Jnktyy7D/rCSPJjkwfD5z2J8kfz68Fs8kef/azmDlkpyS5KkkDw/bFyZ5Ypj7fcM/KU2S04btg8P9W9dy3CuV5Iwk9yf55rD+V2yEdU/y+8P/688muSfJ6V3XPMldSQ4neXbJvonXOMmO4fgDSXaM89wnfdw3wBtxvwl8vKouBi4HPjrM71ZgX1VtA/YN2zB6HbYNH7uAz574Ic/cLcD+Jdu3A3cMc38d2Dns3wm8XlU/B9wxHL
ee/Rnwt1X188D7GL0Grdc9yXnA7wHzVfULjP658Jvou+ZfAK49at9Ea5zkLOCTjN7G9DLgk0e+ILytqjqpP4ArgEeWbN8G3LbW41rF+T4IXAM8D2we9m0Gnh9ufw64ecnxbx23Hj8YvYPXPuAq4GFGb9/4GrDp6PVn9N4BVwy3Nw3HZa3nMOW83w186+jxd193fvTey2cNa/gw8Kud1xzYCjw77RoDNwOfW7L//x13vI+T/sydY78R93lrNJZVNXzLeSnwBHBuVb0CMHw+Zzis2+uxG/gE8MNh+z3AG1X15rC9dH5vzX24/3vD8evRRcAi8JfDJanPJ3knzde9qv4N+FPgJeAVRmv4JBtjzY+YdI2nWvv1EPex3oh7vUvyLuDLwMeq6vtvd+gx9q3L1yPJh4DDVfXk0t3HOLTGuG+92QS8H/hsVV0K/Bc/+vb8WFrMfbiccD1wIfAzwDsZXY44Wsc1X87x5jrVa7Ae4t7+jbiTnMoo7F+sqgeG3a8m2Tzcvxk4POzv9HpcCXw4yb8C9zK6NLMbOCPJkXcJWzq/t+Y+3P/TwHdP5IBn6BBwqKqeGLbvZxT77uv+AeBbVbVYVT8AHgB+iY2x5kdMusZTrf16iHvrN+JOEuBOYH9VfWbJXQ8BR34qvoPRtfgj+39r+Mn65cD3jnyLt95U1W1VtaWqtjJa169W1UeAx4Abh8OOnvuR1+TG4fh1eRZXVf8OfCfJe4ddVwPfoP+6vwRcnuSnhv/3j8y7/ZovMekaPwJ8MMmZw3c+Hxz2vb21/mHDmD+QuA74F+AF4I/XejwzntsvM/oW6xng6eHjOkbXFfcBB4bPZw3Hh9FvD70AfJ3Rbx2s+Txm8Dr8CvDwcPsi4GvAQeCvgdOG/acP2weH+y9a63GvcM7bgYVh7f8GOHMjrDvwKeCbwLPAXwGndV1z4B5GP1v4AaMz8J3TrDHwO8NrcBD47XGe239+QJIaWg+XZSRJEzLuktSQcZekhoy7JDVk3CWpIeMuSQ0Zd0lq6P8Ag0s1ouK5vTQAAAAASUVORK5CYII=\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXcAAAD7CAYAAACRxdTpAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAQjUlEQVR4nO3da6xdZZ3H8e/PlosjKpceCLbVYigJvFDEwtSghIuDwKglBhIvkcY09g1j8JIojMbRZF7oGyEkE2IVpYoKKBIaQgZJucjEgB4EuYhKIUhrgRaBwsSIA/znxX6qh/a0Z7c9p6fn6feT7Ky1/uvZez//sPl1nXXWOjtVhSSpL6+Z7glIkiaf4S5JHTLcJalDhrskdchwl6QOGe6S1KGhwj3JY0nuT3JvktFWOzjJzUkebsuDWj1JLk2yJsl9SY6bygYkSVvbkSP3U6rq2Kpa1LYvBFZX1UJgddsGOBNY2B7Lgcsma7KSpOHM3oXnLgFObusrgduAL7T692pwd9SdSQ5McnhVPbGtF5ozZ04tWLBgF6YiSXufu+++++mqGhlv37DhXsDPkhTwzapaARy2ObCr6okkh7axc4G1Y567rtW2Ge4LFixgdHR0yKlIkgCS/HFb+4YN9xOran0L8JuT/G577zdObau/cZBkOYPTNrz5zW8echqSpGEMdc69qta35QbgOuAE4KkkhwO05YY2fB0wf8zT5wHrx3nNFVW1qKoWjYyM+1OFJGknTRjuSV6X5PWb14HTgQeAVcDSNmwpcH1bXwWc166aWQxs2t75dknS5BvmtMxhwHVJNo//YVX9d5JfAdckWQY8Dpzbxt8InAWsAf4CfGLSZy1J2q4Jw72qHgXePk79z8Bp49QLOH9SZidJ2ineoSpJHTLcJalDu3ITkwBeeAF+8QsYHYUXX5zu2UiaaT7wATj++El/WcN9Rz39NNxxx+Dx85/DPffAK68M9mW8S/wlaTve9CbDfVqsXfuPIL/jDvjtbwf1/feHxYvhi1+Ek04arB9wwPTOVZIaw32sKvjDH14d5o89Ntj3hjfAu98NH//4IMzf+U7Yb79pna4kbYvhvnYtXHfdP8J8Q7vR9tBD4T3vgc98ZrB829tg1qzpnaskDWnvDPdNm+Daa+H734fbbx8csb/lLfC+9w2C/KST4KijPIcuacbae8L9b3+Dm24aBPqqVYMrWxYuhK98BT76UTjyyOmeoSRNmr7DvQruvBOuvBKuvhr+/GeYMwc++cnBufPjj/foXFKX+gz3P/0JvvWtQag/8sjgypYlSwaBfvrpsM8+0z1DSZpS/YX788/DiSfC44/DKafAl74EH/rQ4GoXSdpL9BfuF1wwuALm9tsHvxyVpL1QX39b5rrr4Ior4KKLDHZJe7V+wv3JJ2H5cjjuOPjyl6d7NpI0rfoI96rBFTAvvDC41HHffad7RpI0rfo45/7tb8MNN8All8Axx0z3bCRp2s38I/dHHhn8iYDTToNPfWq6ZyNJe4SZHe4vvwznnQezZ8N3vwuvmdntSNJkmdmnZX74w8EXZVx5JcyfP92zkaQ9xsw+1H3yycHy7LOndx6StIeZ2eEuSRqX4S5JHTLcJalDhrskdchwl6QOGe6S1CHDXZI6ZLhLUocMd0nqkOEuSR0y3CWpQ0OHe5JZSe5JckPbPiLJXUkeTnJ1kn1bfb+2vabtXzA1U5ckbcuOHLlfADw0ZvvrwMVVtRB4FljW6suAZ6vqSODiNk6StBsNFe5J5gH/Cny7bQc4FfhJG7IS2PynGZe0bdr+09p4SdJuMuyR+yXA54FX2vYhwHNV9VLbXgfMbetzgbUAbf+mNl6StJtMGO5J3g9sqKq7x5bHGVpD7Bv7usuTjCYZ3bhx41CTlSQNZ5gj9xOBDyZ5DLiKwemYS4ADk2z+Jqd5wPq2vg6YD9D2vxF4ZssXraoVVbWoqhaNjIzsUhOSpFebMNyr6qKqmldVC4APA7dU1ceAW4Fz2rClwPVtfVXbpu2/paq2OnKXJE2dXbnO/QvAZ5OsYXBO/fJWvxw4pNU/C1y4a1OUJO2oHfqC7Kq6DbitrT8KnDDOmL8C507C3CRJO8k7VCWpQ4a7JHXIcJekDhnuktQhw12SOmS4S1KHDHdJ6pDhLkkdMtwlqUOGuyR1yHCXpA4Z7pLUIcNdkjpkuEtShwx3SeqQ4S5JHTLcJalDhrskdchwl6QOGe6S1CHDXZI6ZLhLUocMd0nqkOEuSR0y3CWpQ4a7JHXIcJekDhnuktQhw12SOmS4S1KHDHdJ6pDhLkkdMtwlqUMThnuS/ZP8MslvkjyY5KutfkSSu5I8nOTqJPu2+n5te03bv2BqW5AkbWmYI/cXgVOr6u3AscAZSRYDXwcurqqFwLPAsjZ+GfBsVR0JXNzGSZJ2ownDvQb+t23u0x4FnAr8pNVXAme39SVtm7b/tCSZtBlLkiY01Dn3JLOS3AtsAG4GHgGeq6qX2pB1wNy2PhdYC9D2bwIOGec1lycZTTK6cePGXetCkvQqQ4V7Vb1cVccC84ATgKPHG9aW4x2l11aFqhVVtaiqFo2MjAw7X0nSEHboapmqeg64DVgMHJhkdts1D1jf1tcB8wHa/jcCz0zGZCVJwxnmapmRJAe29dcC7wUeAm4FzmnDlgLXt/VVbZu2/5aq2urIXZI0dWZPPITDgZVJZjH4x+CaqrohyW+Bq5L8J3APcHkbfznw/SRrGByxf3gK5i1J2o4Jw72q7gPeMU79UQbn37es/xU4d1JmJ0naKd6hKkkdMtwlqUOGuyR1yHCXpA4Z7pLUIcNdkjpkuEtShwx3SeqQ4S5JHTLcJalDhrskdchwl6QOGe6S1CHDXZI6ZLhLUocMd0nqkOEuSR0y3CWpQ4a7JHXIcJekDhnuktQhw12SOmS4S1KHDHdJ6pDhLkkdMtwlqUOGuyR1yHCXpA4Z7pLUIcNdkjpkuEtShwx3SerQhOGeZH6SW5M8lOTBJBe0+sFJbk7ycFse1OpJcmmSNUnuS3LcVDchSXq1YY7cXwI+V1VHA4uB85McA1wIrK6qhcDqtg1wJrCwPZYDl036rCVJ2zVhuFfVE1X167b+AvAQMBdYAqxsw1YCZ7f1JcD3auBO4MAkh0/6zCVJ27RD59yTLADeAdwFHFZVT8DgHwDg0DZsLrB2zNPWtZokaTcZOtyTHABcC3y6qp7f3tBxajXO6y1PMppkdOPGjcNOQ5I0hKHCPck+DIL9B1X101Z+avPplrbc0OrrgPljnj4PWL/la1bViqpaVFWLRkZGdnb+kqRxDHO1TIDLgYeq6htjdq0Clrb1pcD1Y+rntatmFgObNp++kSTtHrOHGHMi8HHg/iT3ttq/A18DrkmyDHgcOLftuxE4C1gD/AX4xKTOWJI0oQnDvar+h/HPowOcNs74As7f
xXlJknaBd6hKUocMd0nqkOEuSR0y3CWpQ4a7JHXIcJekDhnuktQhw12SOmS4S1KHDHdJ6pDhLkkdMtwlqUOGuyR1yHCXpA4Z7pLUIcNdkjpkuEtShwx3SeqQ4S5JHTLcJalDhrskdchwl6QOGe6S1CHDXZI6ZLhLUocMd0nqkOEuSR0y3CWpQ4a7JHXIcJekDhnuktQhw12SOjRhuCf5TpINSR4YUzs4yc1JHm7Lg1o9SS5NsibJfUmOm8rJS5LGN8yR+xXAGVvULgRWV9VCYHXbBjgTWNgey4HLJmeakqQdMWG4V9XPgWe2KC8BVrb1lcDZY+rfq4E7gQOTHD5Zk5UkDWdnz7kfVlVPALTloa0+F1g7Zty6VpMk7UaT/QvVjFOrcQcmy5OMJhnduHHjJE9DkvZuOxvuT20+3dKWG1p9HTB/zLh5wPrxXqCqVlTVoqpaNDIyspPTkCSNZ2fDfRWwtK0vBa4fUz+vXTWzGNi0+fSNJGn3mT3RgCQ/Ak4G5iRZB/wH8DXgmiTLgMeBc9vwG4GzgDXAX4BPTMGcJUkTmDDcq+oj29h12jhjCzh/VyclSdo13qEqSR0y3CWpQ4a7JHXIcJekDhnuktQhw12SOmS4S1KHDHdJ6pDhLkkdMtwlqUOGuyR1yHCXpA4Z7pLUIcNdkjpkuEtShwx3SeqQ4S5JHTLcJalDhrskdchwl6QOGe6S1CHDXZI6ZLhLUocMd0nqkOEuSR0y3CWpQ4a7JHXIcJekDhnuktQhw12SOmS4S1KHDHdJ6tCUhHuSM5L8PsmaJBdOxXtIkrZt0sM9ySzgv4AzgWOAjyQ5ZrLfR5K0bVNx5H4CsKaqHq2qvwFXAUum4H0kSdswFeE+F1g7Zntdq0mSdpOpCPeMU6utBiXLk4wmGd24cePOvdNRR8E558CsWTv3fEnq1FSE+zpg/pjtecD6LQdV1YqqWlRVi0ZGRnbunZYsgR//GPbff+eeL0mdmopw/xWwMMkRSfYFPgysmoL3kSRtw+zJfsGqeinJvwE3AbOA71TVg5P9PpKkbZv0cAeoqhuBG6fitSVJE/MOVUnqkOEuSR0y3CWpQ4a7JHXIcJekDqVqq5tHd/8kko3AH3fy6XOApydxOnuivaFH2Dv6tMc+7Ck9vqWqxr0LdI8I912RZLSqFk33PKbS3tAj7B192mMfZkKPnpaRpA4Z7pLUoR7CfcV0T2A32Bt6hL2jT3vswx7f44w/5y5J2loPR+6SpC3M6HDv5Yu4k3wnyYYkD4ypHZzk5iQPt+VBrZ4kl7ae70ty3PTNfHhJ5ie5NclDSR5MckGrd9Nnkv2T/DLJb1qPX231I5Lc1Xq8uv0pbJLs17bXtP0LpnP+OyLJrCT3JLmhbXfVY5LHktyf5N4ko602oz6rMzbcO/si7iuAM7aoXQisrqqFwOq2DYN+F7bHcuCy3TTHXfUS8LmqOhpYDJzf/nv11OeLwKlV9XbgWOCMJIuBrwMXtx6fBZa18cuAZ6vqSODiNm6muAB4aMx2jz2eUlXHjrnkcWZ9VqtqRj6AdwE3jdm+CLhouue1C/0sAB4Ys/174PC2fjjw+7b+TeAj442bSQ/geuBfeu0T+Cfg18A/M7jZZXar//1zy+A7D97V1me3cZnuuQ/R2zwG4XYqcAODr9bsrcfHgDlb1GbUZ3XGHrnT/xdxH1ZVTwC05aGtPuP7bj+avwO4i876bKcr7gU2ADcDjwDPVdVLbcjYPv7eY9u/CThk9854p1wCfB54pW0fQn89FvCzJHcnWd5qM+qzOiVf1rGbDPVF3B2a0X0nOQC4Fvh0VT2fjNfOYOg4tT2+z6p6GTg2yYHAdcDR4w1ryxnXY5L3Axuq6u4kJ28ujzN0xvbYnFhV65McCtyc5HfbGbtH9jiTj9yH+iLuGeypJIcDtOWGVp+xfSfZh0Gw/6CqftrK3fUJUFXPAbcx+P3CgUk2H0iN7ePvPbb9bwSe2b0z3WEnAh9M8hhwFYNTM5fQV49U1fq23MDgH+kTmGGf1Zkc7r1/EfcqYGlbX8rgHPXm+nntN/SLgU2bf1Tck2VwiH458FBVfWPMrm76TDLSjthJ8lrgvQx+6XgrcE4btmWPm3s/B7il2knbPVVVXVRV86pqAYP/526pqo/RUY9JXpfk9ZvXgdOBB5hpn9XpPum/i7/0OAv4A4Pzml+c7vnsQh8/Ap4A/o/BUcAyBuclVwMPt+XBbWwYXCX0CHA/sGi65z9kj+9m8KPqfcC97XFWT30CbwPuaT0+AHy51d8K/BJYA/wY2K/V92/ba9r+t053DzvY78nADb312Hr5TXs8uDlbZtpn1TtUJalDM/m0jCRpGwx3SeqQ4S5JHTLcJalDhrskdchwl6QOGe6S1CHDXZI69P9Kl7zAwjF0XwAAAABJRU5ErkJggg==\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXcAAAD4CAYAAAAXUaZHAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAQlklEQVR4nO3df4xdZZ3H8ffXlh+CK4V2qLXT7GAYE0xcfmTEGjZGwTWAaEmECNGlMSVNFBWD0YXdZFfWRSQxgCQbsnUh1I1LQURpSLPQFIhsosgUSilb2Y4E7aSEjgtU118IfveP84x7aW+Z25k7c2eeeb+Sm3PO9zz33u8Ths+cPnN/RGYiSarLG3rdgCSp+wx3SaqQ4S5JFTLcJalChrskVWhhrxsAWLJkSQ4MDPS6DUmaU7Zu3fqLzOxrd25WhPvAwADDw8O9bkOS5pSI+NnBzrksI0kV6ijcI+LZiHgyIrZFxHCpHRcRmyNiV9keW+oRETdFxEhEbI+I06ZzApKkAx3Klfv7M/OUzBwqx1cCWzJzENhSjgHOAQbLbS1wc7ealSR1ZirLMquA9WV/PXB+S/1b2fgRsCgilk3heSRJh6jTcE/g/ojYGhFrS21pZj4HULbHl/pyYHfLfUdL7TUiYm1EDEfE8NjY2OS6lyS11emrZc7IzD0RcTywOSJ+8jpjo03tgE8ny8x1wDqAoaEhP71Mkrqooyv3zNxTtnuB7wGnA8+PL7eU7d4yfBRY0XL3fmBPtxqWJE1swiv3iDgaeENm/qrsfxD4R2AjsBr4WtneU+6yEfhMRGwA3g3sG1++mTMyYft22LQJfvvbXncjqWYf/jC8611df9hOlmWWAt+LiPHx/56Z/xERjwJ3RsQa4OfAhWX8JuBcYAT4DfDJrnc9XXbtgg0b4PbbYefOphbtVpkkqUve+tbehHtmPgOc3Kb+P8BZbeoJXNaV7mbC6CjccUcT6Fu3NrX3vhc+9zn46Eehr+07eyVpVpsVHz8w48bG4K67mqv0hx9ulmGGhuDrX4ePfQz6+3vdoSRNyfwK982b4frrm+2rr8JJJ8HVV8NFF8HgYK+7k6SumT/hvm0bnHceLF0KX/wiXHwxvPOdrqlLqtL8CPdf/7oJ88WL4bHHYMmSXnckSdNqfoT7FVfA0083yzEGu6R5oP6P/L37bli3rlmKOeuAF/dIUpXqDvfdu+HSS5tXwnzlK73uRpJmTL3h/uqr8IlPwMsvN69hP/zwXnckSTOm3jX3a6+FH/wAbrsNTjyx191I0oyq88r9hz+EL3+5eYXMJZf0uhtJmnH1hXsmfPazsHw53Hyzr2OXNC/Vtyxz//3NZ8R885twzDG97kaSeqK+K/drrmk+G8blGEnzWF1X7g8/3Ny+8Q1fHSNpXqvryv2aa5qP6L300l53Ikk9VU+4P/oo3Hdf81EDRx3V624kqafqCfevfhUWLYJPf7rXnUhSz9UR7jt2wPe/33x70pvf3OtuJKnn6gj3a6+Fo49uwl2SVEG4j4w0X5f3qU81n9cuSaog3K+7Dg47rPlDqiQJmOvhvns3rF8Pa9bAsmW97kaSZo25He533w1/+INX7ZK0n7kd7i+/3Gzf8pbe9iFJs8zcDndJUluGuyRVyHCXpAoZ7pJUIcNdkipkuEtShQx3SapQx+EeEQsi4vGIuLccnxARj0TEroi4IyIOL/UjyvFIOT8wPa1Lkg7mUK7cLwd2thxfB9yQmYPAi8CaUl8DvJiZJwI3lHGSpBnUUbhHRD/wIeBfy3EAZwJ3lSHrgfPL/qpyTDl/VhkvSZohnV653wh8CfhjOV4MvJSZr5TjUWB52V8O7AYo5/eV8ZKkGTJhuEfEecDezNzaWm4zNDs41/q4ayNiOCKGx8bGOmpWktSZTq7czwA+EhHPAhtolmNuBBZFxMIyph/YU/ZHgRUA5fwxwAv7P2hmrsvMocwc6uvrm9IkJEmvNWG4Z+ZVmdmfmQPARcADmflx4EHggjJsNXBP2d9YjinnH8jMA67cJUnTZyqvc/8b4IqIGKFZU7+l1G8BFpf6FcCVU2tRknSoFk485P9l5kPAQ2X/GeD0NmN+B1zYhd4kSZPkO1QlqUKGuyRVyHCXpAoZ7pJUIcNdkipkuEtShQx3SaqQ4S5JFTLcJalChrskVchwl6QKGe6SVCHDXZIqZLhLUoUMd0mqkOEuSRUy3CWpQoa7JFXIcJekChnuklQhw12SKmS4S1KFDHdJqpDhLkkVMtwlqUKGuyRVyHCXpAoZ7pJUIcNdkipkuEtShQx3SaqQ4S5JFZow3CPiyIj4cUQ8ERFPRcTVpX5CRDwSEbsi4o6IOLzUjyjHI+X8wPROQZK0v06u3H8PnJmZJwOnAGdHxErgOuCGzBwEXgTWlPFrgBcz80TghjJOkjSDJgz3bPxvOTys3BI4E7ir1NcD55f9VeWYcv6siIiudSxJmlBHa+4RsSAitgF7gc3AT4GXMvOVMmQUWF72lwO7Acr5fcDiNo+5NiKGI2J4bGxsarOQJL1GR+Gema9m5ilAP3A6cFK7YWXb7io9DyhkrsvMocwc6uvr67RfSVIHDunVMpn5EvAQsBJYFBELy6l+YE/ZHwVWAJTzxwAvdKNZSVJnOnm1TF9ELCr7bwQ+AOwEHgQuKMNWA/eU/Y3lmHL+gcw84MpdkjR9Fk48hGXA+ohYQPPL4M7MvDci/gvYEBH/BDwO3FLG3wL8W0SM0FyxXzQNfUuSXseE4Z6Z24FT29SfoVl/37/+O+DCrnQnSZoU36EqSRUy3CWpQoa7JFXIcJekChnuklQhw12SKmS4S1KFDHdJqpDhLkkVMtwlqUKGuyRVyHCXpAoZ7pJUIcNdkipkuEtShQx3SaqQ4S5JFTLcJalChrskVchwl6QKGe6SVCHDXZIqZLhLUoUMd0mqkOEuSRUy3CWpQoa7JFXIcJekChnuklQhw12SKmS4S1KFJgz3iFgREQ9GxM6IeCoiLi/14yJic0TsKttjSz0i4qaIGImI7RFx2nRPQpL0Wp1cub8CfCEzTwJWApdFxDuAK4EtmTkIbCnHAOcAg+W2Fri5611Lkl7XhOGemc9l5mNl/1fATmA5sApYX4atB84v+6uAb2XjR8CiiFjW9c4lSQd1SGvuETEAnAo8AizNzOeg+QUAHF+GLQd2t9xttNT2f6y1ETEcEcNjY2OH3rkk6aA6DveIeBPwXeDzmfnL1xvappYHFDLXZeZQZg719fV12oYkqQMdhXtEHEYT7N/OzLtL+fnx5Zay3Vvqo8CKlrv3A3u6064kqROdvFomgFuAnZl5fcupjcDqsr8auKelfkl51cxKYN/48o0kaWYs7GDMGcBfA09GxLZS+1vga8CdEbEG+DlwYTm3CTgXGAF+A3yyqx1LkiY0Ybhn5n/Sfh0d4Kw24xO4
bIp9SZKmwHeoSlKFDHdJqpDhLkkVMtwlqUKGuyRVyHCXpAoZ7pJUIcNdkipkuEtShQx3SaqQ4S5JFTLcJalChrskVchwl6QKGe6SVCHDXZIqZLhLUoUMd0mqkOEuSRUy3CWpQoa7JFXIcJekChnuklQhw12SKmS4S1KFDHdJqpDhLkkVMtwlqUKGuyRVyHCXpAoZ7pJUoQnDPSJujYi9EbGjpXZcRGyOiF1le2ypR0TcFBEjEbE9Ik6bzuYlSe11cuV+G3D2frUrgS2ZOQhsKccA5wCD5bYWuLk7bUqSDsWE4Z6ZPwBe2K+8Clhf9tcD57fUv5WNHwGLImJZt5qVJHVmsmvuSzPzOYCyPb7UlwO7W8aNlpokaQZ1+w+q0aaWbQdGrI2I4YgYHhsb63IbkjS/TTbcnx9fbinbvaU+CqxoGdcP7Gn3AJm5LjOHMnOor69vkm1IktqZbLhvBFaX/dXAPS31S8qrZlYC+8aXbyRJM2fhRAMi4nbgfcCSiBgF/gH4GnBnRKwBfg5cWIZvAs4FRoDfAJ+chp4lSROYMNwz8+KDnDqrzdgELptqU5KkqfEdqpJUIcNdkipkuEtShQx3SaqQ4S5JFTLcJalChrskVchwl6QKGe6SVCHDXZIqZLhLUoUMd0mqkOEuSRUy3CWpQoa7JFXIcJekChnuklQhw12SKmS4S1KFDHdJqpDhLkkVMtwlqUKGuyRVyHCXpAoZ7pJUIcNdkipkuEtShQx3SaqQ4S5JFTLcJalChrskVchwl6QKTUu4R8TZEfF0RIxExJXT8RySpIPrerhHxALgn4FzgHcAF0fEO7r9PJKkg5uOK/fTgZHMfCYzXwY2AKum4XkkSQcxHeG+HNjdcjxaaq8REWsjYjgihsfGxib3TG9/O1xwASxYMLn7S1KlpiPco00tDyhkrsvMocwc6uvrm9wzrVoF3/kOHHnk5O4vSZWajnAfBVa0HPcDe6bheSRJBzEd4f4oMBgRJ0TE4cBFwMZpeB5J0kEs7PYDZuYrEfEZ4D5gAXBrZj7V7eeRJB1c18MdIDM3AZum47ElSRPzHaqSVCHDXZIqZLhLUoUMd0mqUGQe8P6imW8iYgz42STvvgT4RRfbma3myzxh/szVedalF/P888xs+y7QWRHuUxERw5k51Os+ptt8mSfMn7k6z7rMtnm6LCNJFTLcJalCNYT7ul43MEPmyzxh/szVedZlVs1zzq+5S5IOVMOVuyRpP4a7JFVoTod7TV/EHRG3RsTeiNjRUjsuIjZHxK6yPbbUIyJuKvPeHhGn9a7zQxMRKyLiwYjYGRFPRcTlpV7VXCPiyIj4cUQ8UeZ5damfEBGPlHneUT4Wm4g4ohyPlPMDvez/UEXEgoh4PCLuLce1zvPZiHgyIrZFxHCpzcqf3Tkb7hV+EfdtwNn71a4EtmTmILClHEMz58FyWwvcPEM9dsMrwBcy8yRgJXBZ+e9W21x/D5yZmScDpwBnR8RK4DrghjLPF4E1Zfwa4MXMPBG4oYybSy4HdrYc1zpPgPdn5iktr2mfnT+7mTknb8B7gPtajq8Crup1X1Oc0wCwo+X4aWBZ2V8GPF32/wW4uN24uXYD7gH+qua5AkcBjwHvpnkH48JS/9PPMM33H7yn7C8s46LXvXc4v36aUDsTuJfmqzarm2fp+VlgyX61WfmzO2ev3Onwi7jnuKWZ+RxA2R5f6lXMvfyT/FTgESqca1mq2AbsBTYDPwVeysxXypDWufxpnuX8PmDxzHY8aTcCXwL+WI4XU+c8ofk+6PsjYmtErC21WfmzOy1f1jFDOvoi7krN+blHxJuA7wKfz8xfRrSbUjO0TW1OzDUzXwVOiYhFwPeAk9oNK9s5Oc+IOA/Ym5lbI+J94+U2Q+f0PFuckZl7IuJ4YHNE/OR1xvZ0rnP5yn0+fBH38xGxDKBs95b6nJ57RBxGE+zfzsy7S7nKuQJk5kvAQzR/Y1gUEeMXVa1z+dM8y/ljgBdmttNJOQP4SEQ8C2ygWZq5kfrmCUBm7inbvTS/sE9nlv7szuVwnw9fxL0RWF32V9OsT4/XLyl/jV8J7Bv/Z+FsF80l+i3Azsy8vuVUVXONiL5yxU5EvBH4AM0fHB8ELijD9p/n+PwvAB7IslA7m2XmVZnZn5kDNP8PPpCZH6eyeQJExNER8Wfj+8AHgR3M1p/dXv+BYop/3DgX+G+atcy/63U/U5zL7cBzwB9ofuOvoVmL3ALsKtvjytigeaXQT4EngaFe938I8/xLmn+abge2ldu5tc0V+Avg8TLPHcDfl/rbgB8DI8B3gCNK/chyPFLOv63Xc5jEnN8H3FvrPMucnii3p8YzZ7b+7PrxA5JUobm8LCNJOgjDXZIqZLhLUoUMd0mqkOEuSRUy3CWpQoa7JFXo/wCLi64nYshLIQAAAABJRU5ErkJggg==\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "import matplotlib.pyplot as plt\n", "\n", "# Not ordered\n", "for c in range(3):\n", " plt.plot(roc_curves[c][0], roc_curves[c][1], color = \"red\")\n", " plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.5" } }, "nbformat": 4, "nbformat_minor": 1 }