11. Decision trees#

Libraries:

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.tree import plot_tree
import matplotlib.pyplot as plt
from sklearn.preprocessing import OneHotEncoder

To illustrate the tree-building process, let's consider a simple example. Imagine you work for a Hollywood movie studio and your desk is piled high with scripts. Rather than reading each one from cover to cover, you decide to develop a decision tree algorithm to predict whether a potential movie would fall into one of three categories:

  • mainstream hit,

  • critic's choice, or

  • box office bust.

After reviewing the data for 30 different movie scripts, a pattern emerges. There seems to be a relationship between a film's proposed shooting budget, the number of A-list celebrities lined up for starring roles, and the success categories.

_images/im12.png

To build a simple decision tree from this information, we can apply a divide-and-conquer strategy.

_images/im22.png

Result:

_images/im3.png

You may have noticed that diagonal lines could have split the data even more cleanly. This is a limitation of the decision tree, which uses axis-parallel splits. Because each split considers one feature at a time, the decision tree is prevented from forming more complex decision rules, such as "if the number of celebrities is greater than the estimated budget, then it will be a critical success."

So, what exactly is a decision tree?

  • The model itself comprises a series of logical decisions, similar to a flowchart, with decision nodes that indicate a decision on an attribute. These split into branches that indicate the decision's choices. The tree ends in leaf nodes (also known as terminal nodes) that denote the outcome of following a combination of decisions.

| Strengths | Weaknesses |
| --- | --- |
| An all-purpose classifier that does well on most problems | Decision tree models are often biased toward splits on features with a large number of levels |
| A highly automatic learning process that can handle numeric or nominal features as well as missing data | It is easy to overfit or underfit the model |
| Uses only the most important features | Can have trouble modeling some relationships due to the reliance on axis-parallel splits |
| Can be used on data with relatively few training examples or a very large number | Small changes in the training data can result in large changes to the decision logic |
| Results in a model that can be interpreted without a mathematical background (for relatively small trees) | Large trees can be difficult to interpret and the decisions they make may seem counterintuitive |
| More efficient than other complex models | |

11.1. Choosing the best split#

Entropy

The entropy of a data sample indicates how mixed the class values are; the minimum value of 0 indicates that the sample is completely homogeneous, while the maximum (1 for two classes, \(\log_2(c)\) in general) indicates maximal disorder.

\[ Entropy(S) = -\sum_{i=1}^{c} p_i \log_2(p_i) \]

In the entropy formula, for a given segment of data \((S)\), the term \(c\) refers to the number of different class levels, and \(p_i\) refers to the proportion of values falling in class level \(i\). For example, suppose we have a partition of data with two classes: red (\(60\) percent) and white (\(40\) percent). We can compute the entropy as:

import math

# Class proportions
p1 = 0.60
p2 = 0.40

# Entropy calculation
result = -p1 * math.log2(p1) - p2 * math.log2(p2)

# Result
print(result)
0.9709505944546686

We can examine the entropy for all possible arrangements of two classes. If we know that the proportion of examples in one class is \(x\), then the proportion in the other class is \(1 - x\).

We define the entropy function and create a grid of x values so the function can be visualized:

import numpy as np
import matplotlib.pyplot as plt

# Define the two-class entropy function
def entropy(x):
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

# Generate x values between 0 and 1 (exclusive, to avoid log(0))
x_values = np.linspace(0.01, 0.99, 100)

# Compute the entropy for each x
y_values = entropy(x_values)

# Plot
plt.plot(x_values, y_values, color='blue', linewidth=1)
plt.xlabel('x')
plt.ylabel('Entropy')
plt.title('Entropy as a function of the class proportion x')
plt.show()
_images/529eb53cbdec27f4780c688a4ae77904c2de4e80404ef24ef7a6ae1ca804d1b9.png

As illustrated by the peak in entropy at \(x = 0.50\), a \(50{-}50\) split results in maximum entropy. As one class increasingly dominates the other, the entropy reduces toward zero.
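The same formula generalizes beyond two classes: with \(c\) equally likely classes the entropy reaches its maximum value of \(\log_2(c)\). A minimal sketch (the three-class proportions below are illustrative):

```python
import math

def entropy(proportions):
    """Shannon entropy in bits for a list of class proportions."""
    return -sum(p * math.log2(p) for p in proportions if p > 0)

# Two classes, 60/40 (matches the earlier calculation): ~0.971
print(entropy([0.60, 0.40]))

# Three equally likely classes: the maximum, log2(3), ~1.585
print(entropy([1/3, 1/3, 1/3]))
```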

Given this measure of purity (such as entropy), the algorithm must still decide which feature to split on. For this, the algorithm uses the entropy to calculate the change in homogeneity that would result from a split on each possible feature. The calculation is known as information gain. The information gain for a feature \(F\) is calculated as the difference between the entropy of the segment before the split \((S_1)\) and the partitions resulting from the split \((S_2)\), where \(Entropy(S_2)\) is the sum of the partitions' entropies weighted by the proportion of records falling in each partition:

\[ InfoGain(F) = Entropy(S_1) - Entropy(S_2) \]

The higher the information gain, the better a feature is at creating homogeneous groups after splitting on it.
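As a sketch, assuming a hypothetical split of the red/white segment above into two equal-sized partitions, the information gain can be computed directly from the definition:

```python
import math

def entropy(proportions):
    return -sum(p * math.log2(p) for p in proportions if p > 0)

# Segment before the split: 60% red, 40% white
entropy_before = entropy([0.60, 0.40])

# Hypothetical split into two equal-sized partitions:
# partition A is 90% red / 10% white, partition B is 30% red / 70% white.
# Entropy after the split is the weighted sum of the partition entropies.
entropy_after = 0.5 * entropy([0.90, 0.10]) + 0.5 * entropy([0.30, 0.70])

# Information gain: the reduction in entropy achieved by the split
info_gain = entropy_before - entropy_after
print(info_gain)
```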

Information gain is not the only splitting criterion that can be used to build decision trees. Other commonly used criteria include the Gini index and the gain ratio, among others. To dig deeper into these criteria, see: link

11.2. Example: identifying loan risk#

11.2.1. Step 1: collecting the data#

The data represent loans obtained from a credit agency in Germany.

# Load the data
url = "https://github.com/vmoprojs/DataLectures/raw/master/credit.csv"
credit = pd.read_csv(url)

The credit dataset includes \(1000\) loan examples, plus a mix of numeric and nominal features indicating characteristics of the loan and of the loan applicant.

One variable indicates whether the loan went into default. Let's see whether we can determine a pattern that predicts this outcome.

11.2.2. Step 2: exploring and preparing the data#

Let's look at frequency tables (value_counts()) for a couple of loan features that seem likely to predict a default. The checking_balance and savings_balance features indicate the applicant's checking and savings account balances, and are recorded as categorical variables:

# View the structure of the data
print(credit.info())

# Frequency tables for checking and savings account balances
print(credit['checking_balance'].value_counts())
print(credit['savings_balance'].value_counts())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000 entries, 0 to 999
Data columns (total 17 columns):
 #   Column                Non-Null Count  Dtype 
---  ------                --------------  ----- 
 0   checking_balance      1000 non-null   object
 1   months_loan_duration  1000 non-null   int64 
 2   credit_history        1000 non-null   object
 3   purpose               1000 non-null   object
 4   amount                1000 non-null   int64 
 5   savings_balance       1000 non-null   object
 6   employment_duration   1000 non-null   object
 7   percent_of_income     1000 non-null   int64 
 8   years_at_residence    1000 non-null   int64 
 9   age                   1000 non-null   int64 
 10  other_credit          1000 non-null   object
 11  housing               1000 non-null   object
 12  existing_loans_count  1000 non-null   int64 
 13  job                   1000 non-null   object
 14  dependents            1000 non-null   int64 
 15  phone                 1000 non-null   object
 16  default               1000 non-null   object
dtypes: int64(7), object(10)
memory usage: 132.9+ KB
None
checking_balance
unknown       394
< 0 DM        274
1 - 200 DM    269
> 200 DM       63
Name: count, dtype: int64
savings_balance
< 100 DM         603
unknown          183
100 - 500 DM     103
500 - 1000 DM     63
> 1000 DM         48
Name: count, dtype: int64

Since the loan data were obtained in Germany, the currency is recorded in Deutsche Marks (DM). It seems a valid assumption that larger checking and savings account balances should be related to a lower chance of loan default.

Some of the loan's features are numeric, such as its term (months_loan_duration) and the amount of credit requested (amount).

# Summary of the loan duration
print(credit['months_loan_duration'].describe())
count    1000.000000
mean       20.903000
std        12.058814
min         4.000000
25%        12.000000
50%        18.000000
75%        24.000000
max        72.000000
Name: months_loan_duration, dtype: float64

The loan amounts ranged from 250 DM to 18424 DM over terms of 4 to 72 months, with a median duration of 18 months and a median amount of about 2320 DM.

The default variable indicates whether the loan applicant was unable to meet the agreed payment terms and went into default. A total of 30 percent of the loans went into default:

# Frequency table for the target variable (default)
print(credit['default'].value_counts())
default
no     700
yes    300
Name: count, dtype: int64

A high default rate is undesirable for a bank because it means the bank is unlikely to fully recover its investment. If we are successful, our model will identify applicants that are likely to default, so that this number can be reduced.

We create the training and test sets:

  1. Randomly shuffle the credit data before splitting.

# Shuffle the rows randomly
np.random.seed(12345)
credit_rand = credit.sample(frac=1).reset_index(drop=True)

We confirm that the data have not changed:

# Summary of the loan amount (after shuffling)
print(credit_rand['amount'].describe())
count     1000.000000
mean      3271.258000
std       2822.736876
min        250.000000
25%       1365.500000
50%       2319.500000
75%       3972.250000
max      18424.000000
Name: amount, dtype: float64
  2. Now we can split the data into training data (90 percent, or 900 records) and test data (10 percent, or 100 records).

# Split into training and test sets
credit_train = credit_rand.iloc[:900, :]
credit_test = credit_rand.iloc[900:, :]

If all went well, we should have roughly 30 percent of defaulted loans in each of the datasets.

# Proportions of the 'default' variable in the training and test sets
print(credit_train['default'].value_counts(normalize=True))
print(credit_test['default'].value_counts(normalize=True))
default
no     0.707778
yes    0.292222
Name: proportion, dtype: float64
default
no     0.63
yes    0.37
Name: proportion, dtype: float64

11.2.3. Step 3: training a model on the data#

The 17th column of credit_train is the default variable, so we must exclude it from the training features and supply it as the target for classification.

We separate the data and select the variables we will work with:

# Select the features and the target
features = ['checking_balance', 'months_loan_duration', 'credit_history', 'purpose', 'amount', 'savings_balance', 
            'employment_duration', 'percent_of_income', 'years_at_residence', 'age', 'other_credit', 'housing', 
            'existing_loans_count', 'job', 'dependents', 'phone']
X_train = credit_train[features]
y_train = credit_train['default']
X_test = credit_test[features]
y_test = credit_test['default']

# Encode the categorical variables
encoder = OneHotEncoder(drop='first', sparse_output=False, handle_unknown='ignore')
X_train_encoded = encoder.fit_transform(X_train)
X_test_encoded = encoder.transform(X_test)
/Users/victormorales/opt/anaconda3/lib/python3.9/site-packages/sklearn/preprocessing/_encoders.py:227: UserWarning: Found unknown categories in columns [1, 4] during transform. These unknown categories will be encoded as all zeros
  warnings.warn(

We train the model:

# Train the decision tree model
# credit_model = DecisionTreeClassifier(random_state=12345, class_weight="balanced")
credit_model = DecisionTreeClassifier(random_state=12345, criterion='entropy')
credit_model.fit(X_train_encoded, y_train)
DecisionTreeClassifier(criterion='entropy', random_state=12345)
# Plot the model (the decision tree)
# plot_tree expects feature_names as a list, so convert the encoder's array
plt.figure(figsize=(20, 10))
plot_tree(credit_model,
          feature_names=list(encoder.get_feature_names_out()),
          class_names=['no', 'yes'], filled=True)
plt.show()

11.2.4. Step 4: evaluating model performance#

# Predict on the test set
credit_pred = credit_model.predict(X_test_encoded)
# Confusion matrix and classification report
conf_matrix = confusion_matrix(y_test, credit_pred)
class_report = classification_report(y_test, credit_pred)
print(conf_matrix)
print(class_report)
[[45 18]
 [21 16]]
              precision    recall  f1-score   support

          no       0.68      0.71      0.70        63
         yes       0.47      0.43      0.45        37

    accuracy                           0.61       100
   macro avg       0.58      0.57      0.57       100
weighted avg       0.60      0.61      0.61       100
# Confusion matrix proportions:
# rows normalized by true class (per-class recall) and
# columns normalized by predicted class (per-class precision)
conf_matrix_norm1 = conf_matrix.astype('float') / conf_matrix.sum(axis=1)[:, np.newaxis]
conf_matrix_norm2 = conf_matrix.astype('float') / conf_matrix.sum(axis=0)[np.newaxis, :]
print(conf_matrix_norm1)
print(conf_matrix_norm2)
[[0.71428571 0.28571429]
 [0.56756757 0.43243243]]
[[0.68181818 0.52941176]
 [0.31818182 0.47058824]]

11.3. Improving the estimate#

The definition of the cost-complexity measure:

For any subtree \(T < T_{max}\), define its complexity as \(|\tilde{T}|\), the number of terminal (leaf) nodes in \(T\). Let \(\alpha \geq 0\) be a real number called the complexity parameter, and define the cost-complexity measure \(R_{\alpha}(T)\) as:

\[ R_{\alpha}(T)=R(T) +\alpha| \tilde{T}| \]

The more leaf nodes (nodes without children) the tree contains, the higher its complexity, because we have more flexibility to partition the space into smaller pieces and therefore more opportunities to fit the training data. There is also the question of how much weight to give to the tree's size; the complexity parameter \(\alpha\) controls exactly that.

In the end, the cost-complexity measure is a penalized version of the resubstitution error rate. This is the function to minimize when pruning the tree.

Which tree is ultimately selected depends on \(\alpha\). If \(\alpha=0\), the largest tree is chosen because the complexity term vanishes. As \(\alpha \to \infty\), a single node is chosen.
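As a toy illustration (hypothetical subtrees, not the credit model), we can compute \(R_{\alpha}(T)\) for a few candidate subtrees and watch the winner shrink as \(\alpha\) grows:

```python
# Hypothetical subtrees: name -> (resubstitution error R(T), number of leaves |T~|)
subtrees = {"full": (0.05, 20), "medium": (0.10, 8), "stump": (0.25, 1)}

def best_subtree(alpha):
    # Minimize R_alpha(T) = R(T) + alpha * |leaves|
    return min(subtrees, key=lambda t: subtrees[t][0] + alpha * subtrees[t][1])

print(best_subtree(0.0))   # 'full'   -> with alpha = 0 the largest tree wins
print(best_subtree(0.02))  # 'medium' -> a moderate alpha favors a smaller tree
print(best_subtree(1.0))   # 'stump'  -> a single node as alpha grows large
```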

# Get the ccp_alpha values for pruning
path = credit_model.cost_complexity_pruning_path(X_train_encoded, y_train)
ccp_alphas, impurities = path.ccp_alphas, path.impurities
# Fit a decision tree for each ccp_alpha value
clfs = []
for ccp_alpha in ccp_alphas:
    # clf = DecisionTreeClassifier(random_state=12345, ccp_alpha=ccp_alpha, class_weight="balanced")
    clf = DecisionTreeClassifier(random_state=12345, ccp_alpha=ccp_alpha)
    clf.fit(X_train_encoded, y_train)
    clfs.append(clf)
# Compute training and test accuracy for each tree
train_scores = [clf.score(X_train_encoded, y_train) for clf in clfs]
test_scores = [clf.score(X_test_encoded, y_test) for clf in clfs]
# Plot training and test accuracy as a function of ccp_alpha
plt.figure(figsize=(10, 6))
plt.plot(ccp_alphas, train_scores, marker='o', label='train', drawstyle="steps-post")
plt.plot(ccp_alphas, test_scores, marker='o', label='test', drawstyle="steps-post")
plt.xlabel("ccp_alpha")
plt.ylabel("accuracy")
plt.legend()
plt.title("Accuracy vs ccp_alpha")
plt.show()
_images/0602b2a86a3116c9b92451cd5fcb5bba61de3d258ef3e9ab6fe4579604b800fb.png
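Note that picking `ccp_alpha` by maximizing test accuracy, as this section does, reuses the test set for model selection. A more robust sketch searches over `ccp_alpha` with cross-validation instead; shown here on synthetic stand-in data (hypothetical parameters, not the credit dataset):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the credit data (hypothetical)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Candidate alphas taken from the pruning path of a full tree
tree = DecisionTreeClassifier(random_state=0)
path = tree.cost_complexity_pruning_path(X, y)
param_grid = {"ccp_alpha": np.unique(np.clip(path.ccp_alphas, 0, None))}

# 5-fold cross-validation picks alpha without touching a held-out test set
search = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_["ccp_alpha"], search.best_score_)
```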
# Select the best model based on test performance
best_index = np.argmax(test_scores)
best_ccp_alpha = ccp_alphas[best_index]
best_model = clfs[best_index]
# Plot the best model (the pruned decision tree)
plt.figure(figsize=(20,10))
plot_tree(best_model, feature_names=encoder.get_feature_names_out(features), class_names=['no', 'yes'], filled=True)
plt.show()
_images/d374fdc2e0bca9bb494f14fc8f4509b2bbbbd844620554cf1056cdec0cd661ed.png
# Predict on the test set using the best model
credit_pred = best_model.predict(X_test_encoded)
# Confusion matrix and classification report
conf_matrix = confusion_matrix(y_test, credit_pred)
class_report = classification_report(y_test, credit_pred)
print(conf_matrix)
print(class_report)
[[62  1]
 [34  3]]
              precision    recall  f1-score   support

          no       0.65      0.98      0.78        63
         yes       0.75      0.08      0.15        37

    accuracy                           0.65       100
   macro avg       0.70      0.53      0.46       100
weighted avg       0.68      0.65      0.55       100
# Confusion matrix proportions (rows: per true class; columns: per predicted class)
conf_matrix_norm1 = conf_matrix.astype('float') / conf_matrix.sum(axis=1)[:, np.newaxis]
conf_matrix_norm2 = conf_matrix.astype('float') / conf_matrix.sum(axis=0)[np.newaxis, :]

print(conf_matrix_norm1)
print(conf_matrix_norm2)
[[0.98412698 0.01587302]
 [0.91891892 0.08108108]]
[[0.64583333 0.25      ]
 [0.35416667 0.75      ]]
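The pruned tree rarely predicts `yes` (recall 0.08): pruning by overall accuracy favors the majority class. One common mitigation, hinted at by the commented-out line earlier, is `class_weight="balanced"`, which reweights classes inversely to their frequency. A minimal sketch on synthetic imbalanced data (hypothetical parameters; improvement is typical but not guaranteed):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import recall_score

# Imbalanced synthetic data: roughly 90% negatives, 10% positives
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for cw in [None, "balanced"]:
    clf = DecisionTreeClassifier(max_depth=3, class_weight=cw, random_state=0)
    clf.fit(X_tr, y_tr)
    # Minority-class recall often improves with class_weight="balanced"
    print(cw, recall_score(y_te, clf.predict(X_te)))
```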