{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# # Classifiers introduction\n",
"\n",
"In the following program we introduce the basic steps of classification of a dataset in a matrix"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Import the package for learning and modeling trees"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"from sklearn import tree"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Define the matrix containing the data (one example per row)\n",
"and the vector containing the corresponding target value"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"X = [[0, 0, 0], [1, 1, 1], [0, 1, 0], [0, 0, 1], [1, 1, 0], [1, 0, 1]]\n",
"Y = [1, 0, 0, 0, 1, 1]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Declare the classification model you want to use and then fit the model to the data"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"clf = tree.DecisionTreeClassifier()\n",
"clf = clf.fit(X, Y)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Predict the target value (and print it) for the passed data, using the fitted model currently in clf"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[0]\n"
]
}
],
"source": [
"print(clf.predict([[0, 1, 1]]))"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[1 0]\n"
]
}
],
"source": [
"print(clf.predict([[1, 0, 1],[0, 0, 1]]))"
]
},
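{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an aside (not part of the original steps), a fitted scikit-learn classifier also exposes class-membership probabilities through `predict_proba`; a minimal sketch on the same toy tree:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Class-membership probabilities for a query point: the fraction of training\n",
"# examples of each class in the leaf the point falls into\n",
"print(clf.predict_proba([[0, 1, 1]]))"
]
},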
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
"\r\n",
"\r\n",
"\r\n",
"\r\n",
"\r\n"
],
"text/plain": [
""
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import os\n",
"os.environ[\"PATH\"] += os.pathsep + 'C:/Users/galat/.conda/envs/aaut/Library/bin/graphviz'\n",
"import graphviz\n",
"dot_data = tree.export_graphviz(clf, out_file=None) \n",
"graph = graphviz.Source(dot_data) \n",
"graph"
]
},
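{
"cell_type": "markdown",
"metadata": {},
"source": [
"If Graphviz is not available, recent scikit-learn versions can draw the tree with matplotlib instead; a small sketch (assumes matplotlib is installed):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"\n",
"# Matplotlib-based rendering of the same tree (no Graphviz binaries needed)\n",
"tree.plot_tree(clf)\n",
"plt.show()"
]
},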
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the following we start using a dataset (from UCI Machine Learning repository)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.datasets import load_iris\n",
"iris = load_iris()"
]
},
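{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick look at the dataset (added here as a sketch): `iris` is a Bunch object whose `data`, `target`, `feature_names` and `target_names` attributes are used below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Shape of the data matrix and the available feature/class names\n",
"print(iris.data.shape)\n",
"print(iris.feature_names)\n",
"print(iris.target_names)"
]
},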
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Declare the type of prediction model and the working criteria for the model induction algorithm"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"clf = tree.DecisionTreeClassifier(criterion=\"entropy\",random_state=300,min_samples_leaf=5,class_weight={0:1,1:1,2:1})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Split the dataset in training and test set"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"# Generate a random permutation of the indices of examples that will be later used \n",
"# for the training and the test set\n",
"import numpy as np\n",
"np.random.seed(1231)\n",
"indices = np.random.permutation(len(iris.data))\n",
"\n",
"# We now decide to keep the last 10 indices for test set, the remaining for the training set\n",
"indices_training=indices[:-10]\n",
"indices_test=indices[-10:]\n",
"\n",
"iris_X_train = iris.data[indices_training] # keep for training all the matrix elements with the exception of the last 10 \n",
"iris_y_train = iris.target[indices_training]\n",
"iris_X_test = iris.data[indices_test] # keep the last 10 elements for test set\n",
"iris_y_test = iris.target[indices_test]"
]
},
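{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, the same kind of hold-out split can be obtained with scikit-learn's `train_test_split` utility; a sketch (the shuffling will differ from the manual permutation above, and the variable names here are only illustrative):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.model_selection import train_test_split\n",
"\n",
"# Hold out 10 examples for testing, shuffling with a fixed seed\n",
"X_tr, X_te, y_tr, y_te = train_test_split(iris.data, iris.target, test_size=10, random_state=1231)"
]
},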
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Fit the learning model on training set"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"# fit the model to the training data\n",
"clf = clf.fit(iris_X_train, iris_y_train)"
]
},
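{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once fitted, the tree also reports how much each feature contributed to the splits; a small sketch using `feature_importances_`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Impurity-based importance of each feature in the fitted tree\n",
"for name, importance in zip(iris.feature_names, clf.feature_importances_):\n",
"    print(name, \"->\", importance)"
]
},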
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Obtain predictions"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Predictions:\n",
"[0 0 0 1 0 0 1 2 0 0]\n",
"True classes:\n",
"[0 0 0 2 0 0 1 1 0 0]\n",
"['setosa' 'versicolor' 'virginica']\n"
]
}
],
"source": [
"# apply fitted model \"clf\" to the test set \n",
"predicted_y_test = clf.predict(iris_X_test)\n",
"\n",
"# print the predictions (class numbers associated to classes names in target names)\n",
"print(\"Predictions:\")\n",
"print(predicted_y_test)\n",
"print(\"True classes:\")\n",
"print(iris_y_test) \n",
"print(iris.target_names)"
]
},
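{
"cell_type": "markdown",
"metadata": {},
"source": [
"A compact way to summarize these predictions (not shown in the original steps) is an overall score on the test set, e.g. with `accuracy_score`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import accuracy_score\n",
"\n",
"# Fraction of test instances whose predicted class matches the true class\n",
"print(\"Test accuracy:\", accuracy_score(iris_y_test, predicted_y_test))"
]
},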
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Print the index of the test instances and the corresponding predictions"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Look at the specific examples"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Instance # 33: \n",
"sepal length (cm)=5.5, sepal width (cm)=4.2, petal length (cm)=1.4, petal width (cm)=0.2\n",
"Predicted: setosa\t True: setosa\n",
"\n",
"Instance # 2: \n",
"sepal length (cm)=4.7, sepal width (cm)=3.2, petal length (cm)=1.3, petal width (cm)=0.2\n",
"Predicted: setosa\t True: setosa\n",
"\n",
"Instance # 11: \n",
"sepal length (cm)=4.8, sepal width (cm)=3.4, petal length (cm)=1.6, petal width (cm)=0.2\n",
"Predicted: setosa\t True: setosa\n",
"\n",
"Instance # 126: \n",
"sepal length (cm)=6.2, sepal width (cm)=2.8, petal length (cm)=4.8, petal width (cm)=1.8\n",
"Predicted: versicolor\t True: virginica\n",
"\n",
"Instance # 49: \n",
"sepal length (cm)=5.0, sepal width (cm)=3.3, petal length (cm)=1.4, petal width (cm)=0.2\n",
"Predicted: setosa\t True: setosa\n",
"\n",
"Instance # 10: \n",
"sepal length (cm)=5.4, sepal width (cm)=3.7, petal length (cm)=1.5, petal width (cm)=0.2\n",
"Predicted: setosa\t True: setosa\n",
"\n",
"Instance # 85: \n",
"sepal length (cm)=6.0, sepal width (cm)=3.4, petal length (cm)=4.5, petal width (cm)=1.6\n",
"Predicted: versicolor\t True: versicolor\n",
"\n",
"Instance # 52: \n",
"sepal length (cm)=6.9, sepal width (cm)=3.1, petal length (cm)=4.9, petal width (cm)=1.5\n",
"Predicted: virginica\t True: versicolor\n",
"\n",
"Instance # 5: \n",
"sepal length (cm)=5.4, sepal width (cm)=3.9, petal length (cm)=1.7, petal width (cm)=0.4\n",
"Predicted: setosa\t True: setosa\n",
"\n",
"Instance # 21: \n",
"sepal length (cm)=5.1, sepal width (cm)=3.7, petal length (cm)=1.5, petal width (cm)=0.4\n",
"Predicted: setosa\t True: setosa\n",
"\n"
]
}
],
"source": [
"for i in range(len(iris_y_test)): \n",
" print(\"Instance # \"+str(indices_test[i])+\": \")\n",
" s=\"\"\n",
" for j in range(len(iris.feature_names)):\n",
" s=s+iris.feature_names[j]+\"=\"+str(iris_X_test[i][j])\n",
" if (j\r\n",
"\r\n",
"\r\n",
"\r\n",
"\r\n"
],
"text/plain": [
""
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dot_data = tree.export_graphviz(clf, out_file=None, \n",
" feature_names=iris.feature_names, \n",
" class_names=iris.target_names, \n",
" filled=True, rounded=True, \n",
" special_characters=True) \n",
"graph = graphviz.Source(dot_data) \n",
"graph"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 1. Artificial inflation"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# Generate a random permutation of the indices of examples that will be later used \n",
"# for the training and the test set\n",
"import numpy as np\n",
"np.random.seed(1231)\n",
"indices = np.random.permutation(len(iris.data))\n",
"\n",
"# We now decide to keep the last 10 indices for test set, the remaining for the training set\n",
"indices_training=indices[:-10]\n",
"indices_test=indices[-10:]\n",
"\n",
"iris_X_train = iris.data[indices_training] # keep for training all the matrix elements with the exception of the last 10 \n",
"iris_y_train = iris.target[indices_training]\n",
"iris_X_test = iris.data[indices_test] # keep the last 10 elements for test set\n",
"iris_y_test = iris.target[indices_test]"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"samples_x = []\n",
"samples_y = []\n",
"for i in range(0, len(iris_y_train)):\n",
" if iris_y_train[i] == 1:\n",
" for _ in range(9):\n",
" samples_x.append(iris_X_train[i])\n",
" samples_y.append(1)\n",
" elif iris_y_train[i] == 2:\n",
" for _ in range(9):\n",
" samples_x.append(iris_X_train[i])\n",
" samples_y.append(2)\n",
"\n",
"#Samples inflation\n",
"iris_X_train = np.append(iris_X_train, samples_x, axis = 0)\n",
"iris_y_train = np.append(iris_y_train, samples_y, axis = 0)"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Accuracy: 1.0\n",
"F1: 1.0\n"
]
}
],
"source": [
"clf = tree.DecisionTreeClassifier(criterion=\"entropy\",random_state=300,min_samples_leaf=10,class_weight={0:1,1:1,2:1})\n",
"clf = clf.fit(iris_X_train, iris_y_train)\n",
"predicted_y_test = clf.predict(iris_X_test)\n",
"acc_score = accuracy_score(iris_y_test, predicted_y_test)\n",
"f1 = f1_score(iris_y_test, predicted_y_test, average='macro')\n",
"print(\"Accuracy: \", acc_score)\n",
"print(\"F1: \", f1)"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
"\r\n",
"\r\n",
"\r\n",
"\r\n",
"\r\n"
],
"text/plain": [
""
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dot_data = tree.export_graphviz(clf, out_file=None, \n",
" feature_names=iris.feature_names, \n",
" class_names=iris.target_names, \n",
" filled=True, rounded=True, \n",
" special_characters=True) \n",
"graph = graphviz.Source(dot_data) \n",
"graph"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 2. Class weights"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [],
"source": [
"# Generate a random permutation of the indices of examples that will be later used \n",
"# for the training and the test set\n",
"import numpy as np\n",
"np.random.seed(1231)\n",
"indices = np.random.permutation(len(iris.data))\n",
"\n",
"# We now decide to keep the last 10 indices for test set, the remaining for the training set\n",
"indices_training=indices[:-10]\n",
"indices_test=indices[-10:]\n",
"\n",
"iris_X_train = iris.data[indices_training] # keep for training all the matrix elements with the exception of the last 10 \n",
"iris_y_train = iris.target[indices_training]\n",
"iris_X_test = iris.data[indices_test] # keep the last 10 elements for test set\n",
"iris_y_test = iris.target[indices_test]"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Accuracy: 0.8\n",
"F1: 0.5\n"
]
}
],
"source": [
"clf = tree.DecisionTreeClassifier(criterion=\"entropy\",random_state=300,min_samples_leaf=5,class_weight={0:1,1:10,2:10})\n",
"clf = clf.fit(iris_X_train, iris_y_train)\n",
"predicted_y_test = clf.predict(iris_X_test)\n",
"acc_score = accuracy_score(iris_y_test, predicted_y_test)\n",
"f1 = f1_score(iris_y_test, predicted_y_test, average='macro')\n",
"print(\"Accuracy: \", acc_score)\n",
"print(\"F1: \", f1)"
]
},
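{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an alternative to hand-picked weights, scikit-learn can derive weights inversely proportional to the class frequencies; a sketch using `class_weight='balanced'` (illustrative only, kept in a separate variable so it does not overwrite `clf`):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Let scikit-learn set weights inversely proportional to class frequencies\n",
"clf_balanced = tree.DecisionTreeClassifier(criterion=\"entropy\", random_state=300, min_samples_leaf=5, class_weight=\"balanced\")\n",
"clf_balanced = clf_balanced.fit(iris_X_train, iris_y_train)\n",
"print(\"Accuracy:\", accuracy_score(iris_y_test, clf_balanced.predict(iris_X_test)))"
]
},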
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
"\r\n",
"\r\n",
"\r\n",
"\r\n",
"\r\n"
],
"text/plain": [
""
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dot_data = tree.export_graphviz(clf, out_file=None, \n",
" feature_names=iris.feature_names, \n",
" class_names=iris.target_names, \n",
" filled=True, rounded=True, \n",
" special_characters=True) \n",
"graph = graphviz.Source(dot_data) \n",
"graph"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 3. Avoid overfitting"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [],
"source": [
"# Generate a random permutation of the indices of examples that will be later used \n",
"# for the training and the test set\n",
"import numpy as np\n",
"np.random.seed(1231)\n",
"indices = np.random.permutation(len(iris.data))\n",
"\n",
"# We now decide to keep the last 10 indices for test set, the remaining for the training set\n",
"indices_training=indices[:-10]\n",
"indices_test=indices[-10:]\n",
"\n",
"iris_X_train = iris.data[indices_training] # keep for training all the matrix elements with the exception of the last 10 \n",
"iris_y_train = iris.target[indices_training]\n",
"iris_X_test = iris.data[indices_test] # keep the last 10 elements for test set\n",
"iris_y_test = iris.target[indices_test]"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Accuracy: 1.0\n",
"F1: 1.0\n"
]
}
],
"source": [
"clf = tree.DecisionTreeClassifier(criterion=\"entropy\",random_state=300,min_samples_leaf=3,class_weight={0:1,1:10,2:10}, min_impurity_decrease = 0.005, max_depth = 4, max_leaf_nodes = 6)\n",
"clf = clf.fit(iris_X_train, iris_y_train)\n",
"predicted_y_test = clf.predict(iris_X_test)\n",
"acc_score = accuracy_score(iris_y_test, predicted_y_test)\n",
"f1 = f1_score(iris_y_test, predicted_y_test, average='macro')\n",
"print(\"Accuracy: \", acc_score)\n",
"print(\"F1: \", f1)"
]
},
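{
"cell_type": "markdown",
"metadata": {},
"source": [
"A single 10-example test set gives a fairly noisy estimate of generalization; a common complementary check, sketched here, is k-fold cross-validation over the whole dataset:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.model_selection import cross_val_score\n",
"\n",
"# 5-fold cross-validated accuracy of the regularized tree on the full Iris data\n",
"scores = cross_val_score(clf, iris.data, iris.target, cv=5)\n",
"print(scores.mean(), scores.std())"
]
},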
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
"\r\n",
"\r\n",
"\r\n",
"\r\n",
"\r\n"
],
"text/plain": [
""
]
},
"execution_count": 30,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dot_data = tree.export_graphviz(clf, out_file=None, \n",
" feature_names=iris.feature_names, \n",
" class_names=iris.target_names, \n",
" filled=True, rounded=True, \n",
" special_characters=True) \n",
"graph = graphviz.Source(dot_data) \n",
"graph"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 4. Confusion Matrix"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {
"scrolled": true
},
"outputs": [
{
"data": {
"text/plain": [
"array([[7, 0, 0],\n",
" [0, 2, 0],\n",
" [0, 0, 1]])"
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# initializes the confusion matrix\n",
"confusion = np.zeros([3, 3], dtype = int)\n",
"\n",
"# print the corresponding instances indexes and class names\n",
"for i in range(len(iris_y_test)): \n",
" #increments the indexed cell value\n",
" confusion[iris_y_test[i], predicted_y_test[i]]+=1\n",
"confusion"
]
},
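{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same matrix can be obtained directly from scikit-learn, which is a handy cross-check of the manual count above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import confusion_matrix\n",
"\n",
"# Rows are true classes, columns are predicted classes\n",
"print(confusion_matrix(iris_y_test, predicted_y_test))"
]
},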
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 5. ROC Curves"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[(0.0, 43.0), (30.0, 0.0), (30.0, 0.0), (40.0, 0.0), (430.0, 0.0), (440.0, 0.0)], [(0.0, 440.0), (10.0, 20.0), (20.0, 10.0), (30.0, 10.0), (43.0, 0.0), (430.0, 0.0)], [(0.0, 430.0), (10.0, 30.0), (10.0, 20.0), (20.0, 10.0), (43.0, 0.0), (440.0, 0.0)]]\n"
]
},
{
"data": {
"text/plain": [
"[[[0, 0.0, 30.0, 60.0, 100.0, 530.0, 970.0],\n",
" [0, 43.0, 43.0, 43.0, 43.0, 43.0, 43.0]],\n",
" [[0, 0.0, 10.0, 30.0, 60.0, 103.0, 533.0],\n",
" [0, 440.0, 460.0, 470.0, 480.0, 480.0, 480.0]],\n",
" [[0, 0.0, 10.0, 20.0, 40.0, 83.0, 523.0],\n",
" [0, 430.0, 460.0, 480.0, 490.0, 490.0, 490.0]]]"
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Calculates the ROC curves (x, y)\n",
"leafs = []\n",
"class_pairs = [[],[],[]]\n",
"roc_curves = [[[0], [0]], [[0], [0]], [[0], [0]]]\n",
"for i in range(clf.tree_.node_count):\n",
" if (clf.tree_.feature[i] == -2):\n",
" leafs.append(i)\n",
"\n",
"# c = class index\n",
"for leaf in leafs:\n",
" for c in range(3):\n",
" #pairs(neg, pos)\n",
" class_pairs[c].append((clf.tree_.value[leaf][0].sum() - clf.tree_.value[leaf][0][c], clf.tree_.value[leaf][0][c]))\n",
"\n",
"#pairs sorting\n",
"for c in range(3):\n",
" class_pairs[c] = sorted(class_pairs[c], key=lambda t: t[0]/max(1,t[1]))\n",
"print(class_pairs)\n",
"\n",
"for i in range(1, len(leafs) + 1):\n",
" for c in range(3):\n",
" roc_curves[c][0].append(class_pairs[c][i - 1][0] + roc_curves[c][0][i - 1])\n",
" roc_curves[c][1].append(class_pairs[c][i - 1][1] + roc_curves[c][1][i - 1])\n",
"\n",
"roc_curves"
]
},
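{
"cell_type": "markdown",
"metadata": {},
"source": [
"For comparison (not in the original notebook), scikit-learn can compute a one-vs-rest ROC curve per class from the predicted probabilities; a minimal sketch for class 2 ('virginica'). Note that with only 10 test instances the resulting curve is purely illustrative:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import roc_curve\n",
"\n",
"# Probability assigned to each class for every test instance\n",
"probs = clf.predict_proba(iris_X_test)\n",
"\n",
"# Treat class 2 as the positive class and the rest as negative (one-vs-rest)\n",
"fpr, tpr, thresholds = roc_curve(iris_y_test == 2, probs[:, 2])\n",
"print(fpr, tpr)"
]
},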
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAXcAAAD4CAYAAAAXUaZHAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAM3UlEQVR4nO3dX4yl9V3H8fdHloK2qUAZyMqCC3FTISZdmgmCeGGgVMSmcIEJpNGNbrI3NVJtUkEvmiZelMSU1cQ03RTsxpQ/lRIhpJGQLcSYGOogSKFb3IVauoLskEKrXmixXy/Os3Tc7jLnnDmzs/Od9yuZzHme8xzO73d+5D3PPDOzJ1WFJKmXn1jrAUiSZs+4S1JDxl2SGjLuktSQcZekhjadyCc7++yza+vWrSfyKSVp3XvyySdfq6q5SR5zQuO+detWFhYWTuRTStK6l+Tbkz7GyzKS1JBxl6SGjLskNWTcJakh4y5JDRl3SWrIuEtSQyf099yntmcP3H33Wo9CkqazfTvs3n1Cn3J9nLnffTc8/fRaj0KS1o31ceYOo698jz++1qOQpHVhfZy5S5ImYtwlqSHjLkkNGXdJasi4S1JDxl2SGjLuktSQcZekhoy7JDU0dtyTnJLkqSQPD9sXJnkiyYEk9yV5x+oNU5I0iUnO3G8B9i/Zvh24o6q2Aa8DO2c5MEnS9MaKe5ItwK8Dnx+2A1wF3D8cshe4YTUGKEma3Lhn7ruBTwA/HLbfA7xRVW8O24eA8471wCS7kiwkWVhcXFzRYCVJ41k27kk+BByuqieX7j7GoXWsx1fVnqqar6r5ubm5KYcpSZrEOP/k75XAh5NcB5wOvJvRmfwZSTYNZ+9bgJdXb5iSpEkse+ZeVbdV1Zaq2grcBHy1qj4CPAbcOBy2A3hw1UYpSZrISn7P/Q+BP0hykNE1+DtnMyRJ0kpN9E5MVfU48Phw+0XgstkPSZK0Uv6FqiQ1ZNwlqSHjLkkNGXdJasi4S1JDxl2SGjLuktSQcZekhoy7JDVk3CWpIeMuSQ0Zd0lqyLhLUkPGXZIaMu6S1JBxl6SGjLskNWTcJakh4y5JDRl3SWrIuEtSQ8Zdkhoy7pLUkHGXpIaMuyQ1ZNwlqSHjLkkNGXdJasi4S1JDxl2SGjLuktSQcZekhoy7JDVk3CWpIeMuSQ0Zd0lqyLhLUkPLxj3J6Um+luSfkzyX5FPD/guTPJHkQJL7krxj9YcrSRrHOGfu/w1cVVXvA7YD1ya5HLgduKOqtgGvAztXb5iSpEksG/ca+c9h89Tho4CrgPuH/XuBG1ZlhJKkiY11zT3JKUmeBg4DjwIvAG9U1ZvDIYeA847z2F1JFpIsLC4uzmLMkqRljBX3qvrfqtoObAEuAy4+1mHHeeyeqpqvqvm5ubnpRypJGttEvy1TVW8AjwOXA2ck2TTctQV4ebZDkyRNa5zflplLcsZw+yeBDwD7gceAG4fDdgAPrtYgJUmT2bT8IWwG9iY5hdEXgy9V1cNJvgHcm+RPgKeAO1dxnJKkCSwb96p6Brj0GPtfZHT9XZJ0kvEvVCWpIeMuSQ0Zd0lqyLhLUkPGXZIaMu6S1JBxl6SGjLskNWTcJakh4y5JDRl3SWrIuEtSQ8Zdkhoy7pLUkHGXpIaMuyQ1ZNwlqSHjLkkNGXdJasi4S1JDxl2SGjLuktSQcZekhoy7JDVk3CWpIeMuSQ0Zd0lqyLhLUkPGXZIaMu6S1JBxl6SGjLskNWTcJakh4y5JDRl3SWrIuEtSQ8ZdkhpaNu5Jzk/yWJL9SZ5Lcsuw/6wkjyY5MHw+c/WHK0kaxzhn7m8CH6+qi4HLgY8muQS4FdhXVduAfcO2JOkksGzcq+qVqvqn4fZ/APuB84Drgb3DYXuBG1ZrkJKkyUx0zT3JVuBS4Ang3Kp6BUZfAIBzjvOYXUkWkiwsLi6ubLSSpLGMHfck7wK+DHysqr4/7uOqak9VzVfV/Nzc3DRjlCRNaKy4JzmVUdi/WFUPDLtfTbJ5uH8zcHh1hihJmtQ4vy0T4E5gf1V9ZsldDwE7hts7gAdnPzxJ0jQ2jXHMlcBvAl9P8vSw74+ATwNfSrITeAn4jdUZoiRpUsvGvar+Hshx7r56tsORJM2Cf6EqSQ0Zd0lqyLhLUkPGXZIaMu6S1JBxl6SGjLskNWTcJakh4y5JDRl3SWrIuEtSQ8Zdkhoy7pLUkHGXpIaMuyQ1ZNwlqSHjLkkNGXdJasi4S1JDxl2SGjLuktSQcZekhoy7JDVk3CWpIeMuSQ0Zd0lqyLhLUkPGXZIaMu6S1JBxl6SGjLskNWTcJakh4y5JDRl3SWrIuEtSQ8Zdkhoy7pLU0LJxT3JXksNJnl2y76wkjyY5MHw+c3WHKUmaxDhn7l8Arj1q363AvqraBuwbtiVJJ4ll415Vfwd896jd1wN7h9t7gRtmPC5J0gpMe8393Kp6BWD4fM7xDkyyK8lCkoXFxcUpn06SNIlV/4FqVe2pqvmqmp+bm1vtp5MkMX3cX02yGWD4fHh2Q5IkrdS0cX8I2DHc3gE8OJvhSJJmYZxfhbwH+AfgvUkOJdkJfBq4JskB4JphW5J0kti03AFVdfNx7rp6xmORJM2If6EqSQ0Zd0lqyLhLUkPGXZIaMu6S1JBxl6SGjLskNWTcJakh4y5JDRl3SWrIuEtSQ8Zdkhoy7pLUkHGXpIaMuyQ1ZNwlqSHjLkkNGXdJasi4S1JDxl2SGjLuktSQcZekhoy7JDVk3CWpIeMuSQ0Zd0lqyLhLUkPGXZIaMu6S1JBxl6SGjLskNWTcJakh4y5JDRl3SWrIuEtSQ8Zdkhoy7pLU0IrinuTaJM8nOZjk1lkNSpK0MlPHPckpwF8AvwZcAtyc5JJZDUySNL2VnLlfBhysqher6n+Ae4HrZzMsSdJKbFrBY88DvrNk+xDwi0cflGQXsAvgggsumO6Ztm+f7nGStEGtJO45xr76sR1Ve4A9APPz8z92/1h2757qYZK0Ua3ksswh4Pwl21uAl1c2HEnSLKwk7v8IbEtyYZJ3ADcBD81mWJKklZj6skxVvZnkd4FHgFOAu6rquZmNTJI0tZVcc6eqvgJ8ZUZjkSTNiH+hKkkNGXdJasi4S1JDxl2SGkrVdH9XNNWTJYvAt6d8+NnAazMcznri3DeejTpvcO7HmvvPVtXcJP+hExr3lUiyUFXzaz2OteDcN97cN+q8wbnPau5elpGkhoy7JDW0nuK+Z60HsIac+8azUecNzn0m1s01d0nS+NbTmbskaUzGXZIaWhdx7/xG3EnOT/JYkv1Jnktyy7D/rCSPJjkwfD5z2J8kfz68Fs8kef/azmDlkpyS5KkkDw/bFyZ5Ypj7fcM/KU2S04btg8P9W9dy3CuV5Iwk9yf55rD+V2yEdU/y+8P/688muSfJ6V3XPMldSQ4neXbJvonXOMmO4fgDSXaM89wnfdw3wBtxvwl8vKouBi4HPjrM71ZgX1VtA/YN2zB6HbYNH7uAz574Ic/cLcD+Jdu3A3cMc38d2Dns3
wm8XlU/B9wxHLee/Rnwt1X188D7GL0Grdc9yXnA7wHzVfULjP658Jvou+ZfAK49at9Ea5zkLOCTjN7G9DLgk0e+ILytqjqpP4ArgEeWbN8G3LbW41rF+T4IXAM8D2we9m0Gnh9ufw64ecnxbx23Hj8YvYPXPuAq4GFGb9/4GrDp6PVn9N4BVwy3Nw3HZa3nMOW83w186+jxd193fvTey2cNa/gw8Kud1xzYCjw77RoDNwOfW7L//x13vI+T/sydY78R93lrNJZVNXzLeSnwBHBuVb0CMHw+Zzis2+uxG/gE8MNh+z3AG1X15rC9dH5vzX24/3vD8evRRcAi8JfDJanPJ3knzde9qv4N+FPgJeAVRmv4JBtjzY+YdI2nWvv1EPex3oh7vUvyLuDLwMeq6vtvd+gx9q3L1yPJh4DDVfXk0t3HOLTGuG+92QS8H/hsVV0K/Bc/+vb8WFrMfbiccD1wIfAzwDsZXY44Wsc1X87x5jrVa7Ae4t7+jbiTnMoo7F+sqgeG3a8m2Tzcvxk4POzv9HpcCXw4yb8C9zK6NLMbOCPJkXcJWzq/t+Y+3P/TwHdP5IBn6BBwqKqeGLbvZxT77uv+AeBbVbVYVT8AHgB+iY2x5kdMusZTrf16iHvrN+JOEuBOYH9VfWbJXQ8BR34qvoPRtfgj+39r+Mn65cD3jnyLt95U1W1VtaWqtjJa169W1UeAx4Abh8OOnvuR1+TG4fh1eRZXVf8OfCfJe4ddVwPfoP+6vwRcnuSnhv/3j8y7/ZovMekaPwJ8MMmZw3c+Hxz2vb21/mHDmD+QuA74F+AF4I/XejwzntsvM/oW6xng6eHjOkbXFfcBB4bPZw3Hh9FvD70AfJ3Rbx2s+Txm8Dr8CvDwcPsi4GvAQeCvgdOG/acP2weH+y9a63GvcM7bgYVh7f8GOHMjrDvwKeCbwLPAXwGndV1z4B5GP1v4AaMz8J3TrDHwO8NrcBD47XGe239+QJIaWg+XZSRJEzLuktSQcZekhoy7JDVk3CWpIeMuSQ0Zd0lq6P8Ag0s1ouK5vTQAAAAASUVORK5CYII=\n",
"text/plain": [
"